Test Report: QEMU_macOS 19283

8d2418a61c606cc3028c5bf9242bf095ec458362:2024-07-17:35383

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.18
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.92
36 TestAddons/Setup 10.21
37 TestCertOptions 10.03
38 TestCertExpiration 195.17
39 TestDockerFlags 10.17
40 TestForceSystemdFlag 10.1
41 TestForceSystemdEnv 10.34
47 TestErrorSpam/setup 9.84
56 TestFunctional/serial/StartWithProxy 9.99
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.73
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.98
72 TestFunctional/serial/ExtraConfig 5.3
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.17
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.14
91 TestFunctional/parallel/CpCmd 0.29
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.28
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 106.63
109 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
110 TestFunctional/parallel/ServiceCmd/List 0.04
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
113 TestFunctional/parallel/ServiceCmd/Format 0.04
114 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/Version/components 0.04
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
127 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.27
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
136 TestFunctional/parallel/DockerEnv/bash 0.04
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 26.92
150 TestMultiControlPlane/serial/StartCluster 9.81
151 TestMultiControlPlane/serial/DeployApp 102.19
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.07
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 49.96
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.87
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 3.47
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.87
174 TestJSONOutput/start/Command 9.74
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.04
203 TestMinikubeProfile 10.09
206 TestMountStart/serial/StartWithMountFirst 9.86
209 TestMultiNode/serial/FreshStart2Nodes 9.82
210 TestMultiNode/serial/DeployApp2Nodes 112.07
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 56.32
218 TestMultiNode/serial/RestartKeepsNodes 8.78
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 3.09
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 19.97
226 TestPreload 9.94
228 TestScheduledStopUnix 10.04
229 TestSkaffold 12.11
232 TestRunningBinaryUpgrade 588.48
234 TestKubernetesUpgrade 17.05
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.28
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.23
250 TestStoppedBinaryUpgrade/Upgrade 574.83
252 TestPause/serial/Start 9.87
262 TestNoKubernetes/serial/StartWithK8s 9.82
263 TestNoKubernetes/serial/StartWithStopK8s 5.3
264 TestNoKubernetes/serial/Start 5.29
268 TestNoKubernetes/serial/StartNoArgs 5.28
270 TestNetworkPlugins/group/auto/Start 9.74
271 TestNetworkPlugins/group/calico/Start 9.8
272 TestNetworkPlugins/group/custom-flannel/Start 9.82
273 TestNetworkPlugins/group/false/Start 9.75
274 TestNetworkPlugins/group/kindnet/Start 9.79
275 TestNetworkPlugins/group/flannel/Start 9.87
276 TestNetworkPlugins/group/enable-default-cni/Start 9.78
277 TestNetworkPlugins/group/bridge/Start 10.11
278 TestNetworkPlugins/group/kubenet/Start 9.99
280 TestStartStop/group/old-k8s-version/serial/FirstStart 9.79
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 10.01
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/SecondStart 5.24
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 10.03
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 12.1
306 TestStartStop/group/embed-certs/serial/DeployApp 0.1
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
310 TestStartStop/group/embed-certs/serial/SecondStart 5.26
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.09
321 TestStartStop/group/newest-cni/serial/FirstStart 9.87
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.25
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (11.18s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-580000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-580000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.178125292s)

-- stdout --
	{"specversion":"1.0","id":"7fa4443e-969b-49e8-a38f-040d514c3664","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-580000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad4d157c-70cd-4c85-a997-e53b445cc95c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19283"}}
	{"specversion":"1.0","id":"90d399d6-0f64-4f0d-8b6a-d7239df6c973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig"}}
	{"specversion":"1.0","id":"0ab490c6-0771-4278-bcd9-0689e5031580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f2f5400c-93c9-45ef-9ff2-86969c05aae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"67e05026-c193-4e67-9719-b261c137d24e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube"}}
	{"specversion":"1.0","id":"d252a515-6226-492e-9931-7e296de9c62e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"98ac72ea-a860-4f73-ad07-62669686c3fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bc8b1a7-3e04-4963-8503-544d6e449d18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"1758f68b-5ad5-40fd-8491-c56ece0c1931","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8148746-603d-41f9-b58e-29aa0bae7842","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-580000\" primary control-plane node in \"download-only-580000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"896dbe8b-316c-4f1c-9fa0-101b0134a80d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"339ba8c6-81a8-4373-95b0-f5b9a187a47f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60] Decompressors:map[bz2:0x1400080d030 gz:0x1400080d038 tar:0x1400080cfe0 tar.bz2:0x1400080cff0 tar.gz:0x1400080d000 tar.xz:0x1400080d010 tar.zst:0x1400080d020 tbz2:0x1400080cff0 tgz:0x1400080d000 txz:0x1400080d010 tzst:0x1400080d020 xz:0x1400080d040 zip:0x1400080d050 zst:0x1400080d048] Getters:map[file:0x14000886e10 http:0x140005d6190 https:0x140005d61e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"c156ca0e-240e-4b42-a6ce-deba0dddad56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0717 10:49:03.610039    7338 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:49:03.610193    7338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:03.610197    7338 out.go:304] Setting ErrFile to fd 2...
	I0717 10:49:03.610199    7338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:03.610315    7338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	W0717 10:49:03.610400    7338 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19283-6848/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19283-6848/.minikube/config/config.json: no such file or directory
	I0717 10:49:03.611657    7338 out.go:298] Setting JSON to true
	I0717 10:49:03.629060    7338 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4711,"bootTime":1721233832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:49:03.629135    7338 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:49:03.634767    7338 out.go:97] [download-only-580000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:49:03.634877    7338 notify.go:220] Checking for updates...
	W0717 10:49:03.634945    7338 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 10:49:03.638940    7338 out.go:169] MINIKUBE_LOCATION=19283
	I0717 10:49:03.647697    7338 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:49:03.661119    7338 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:49:03.663419    7338 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:49:03.666622    7338 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	W0717 10:49:03.672314    7338 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:49:03.672528    7338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:49:03.675610    7338 out.go:97] Using the qemu2 driver based on user configuration
	I0717 10:49:03.675629    7338 start.go:297] selected driver: qemu2
	I0717 10:49:03.675644    7338 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:49:03.675714    7338 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:49:03.679401    7338 out.go:169] Automatically selected the socket_vmnet network
	I0717 10:49:03.685403    7338 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0717 10:49:03.685493    7338 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:49:03.685570    7338 cni.go:84] Creating CNI manager for ""
	I0717 10:49:03.685589    7338 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 10:49:03.685649    7338 start.go:340] cluster config:
	{Name:download-only-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:49:03.689709    7338 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:49:03.694654    7338 out.go:97] Downloading VM boot image ...
	I0717 10:49:03.694671    7338 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso
	I0717 10:49:08.406985    7338 out.go:97] Starting "download-only-580000" primary control-plane node in "download-only-580000" cluster
	I0717 10:49:08.407019    7338 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:49:08.463029    7338 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 10:49:08.463054    7338 cache.go:56] Caching tarball of preloaded images
	I0717 10:49:08.463846    7338 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:49:08.468149    7338 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 10:49:08.468156    7338 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:08.547375    7338 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 10:49:13.645458    7338 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:13.645639    7338 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:14.341379    7338 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 10:49:14.341575    7338 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/download-only-580000/config.json ...
	I0717 10:49:14.341606    7338 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/download-only-580000/config.json: {Name:mk98ed7f00ff76b7ae93d12fd946317f6e852e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:49:14.342755    7338 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:49:14.342947    7338 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0717 10:49:14.707705    7338 out.go:169] 
	W0717 10:49:14.714813    7338 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60] Decompressors:map[bz2:0x1400080d030 gz:0x1400080d038 tar:0x1400080cfe0 tar.bz2:0x1400080cff0 tar.gz:0x1400080d000 tar.xz:0x1400080d010 tar.zst:0x1400080d020 tbz2:0x1400080cff0 tgz:0x1400080d000 txz:0x1400080d010 tzst:0x1400080d020 xz:0x1400080d040 zip:0x1400080d050 zst:0x1400080d048] Getters:map[file:0x14000886e10 http:0x140005d6190 https:0x140005d61e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0717 10:49:14.714837    7338 out_reason.go:110] 
	W0717 10:49:14.722704    7338 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:49:14.726614    7338 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-580000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.18s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
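Both DownloadOnly failures above trace back to the same 404: the checksum fetch for the v1.20.0 darwin/arm64 kubectl binary is rejected, so the getter aborts and the cached file the second test stats is never written. The server response can be checked directly with a small probe (a diagnostic sketch, not part of the test suite; `probe_url` is a hypothetical helper, and it reports HTTP 000 when run without network access):

```shell
# probe_url prints "<url> -> HTTP <code>" for a given URL.
# curl's --write-out emits 000 if the request never completes.
probe_url() {
  code=$(curl -s -o /dev/null -w '%{http_code}' "$1" || true)
  printf '%s -> HTTP %s\n' "$1" "$code"
}

# The two artifacts named in the getter error above:
probe_url "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"
probe_url "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
```

A 404 on the `.sha256` URL matches the `bad response code: 404` in the log: the getter treats a failed checksum download as an invalid checksum and exits before creating `.minikube/cache/darwin/arm64/v1.20.0/kubectl`.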

TestOffline (9.92s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-712000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-712000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.76832325s)

-- stdout --
	* [offline-docker-712000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-712000" primary control-plane node in "offline-docker-712000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-712000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:00:56.092650    9069 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:56.092817    9069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:56.092820    9069 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:56.092822    9069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:56.092953    9069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:00:56.094308    9069 out.go:298] Setting JSON to false
	I0717 11:00:56.111997    9069 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5424,"bootTime":1721233832,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:00:56.112084    9069 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:00:56.117663    9069 out.go:177] * [offline-docker-712000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:00:56.120688    9069 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:00:56.120701    9069 notify.go:220] Checking for updates...
	I0717 11:00:56.127616    9069 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:00:56.130554    9069 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:00:56.133622    9069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:00:56.136644    9069 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:00:56.139587    9069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:00:56.143015    9069 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:56.143064    9069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:00:56.146583    9069 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:00:56.153696    9069 start.go:297] selected driver: qemu2
	I0717 11:00:56.153708    9069 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:00:56.153715    9069 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:00:56.155722    9069 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:00:56.158606    9069 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:00:56.159848    9069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:00:56.159869    9069 cni.go:84] Creating CNI manager for ""
	I0717 11:00:56.159876    9069 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:00:56.159879    9069 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:00:56.159910    9069 start.go:340] cluster config:
	{Name:offline-docker-712000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-712000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:00:56.163611    9069 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:56.170616    9069 out.go:177] * Starting "offline-docker-712000" primary control-plane node in "offline-docker-712000" cluster
	I0717 11:00:56.174485    9069 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:00:56.174517    9069 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:00:56.174527    9069 cache.go:56] Caching tarball of preloaded images
	I0717 11:00:56.174596    9069 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:00:56.174602    9069 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:00:56.174671    9069 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/offline-docker-712000/config.json ...
	I0717 11:00:56.174686    9069 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/offline-docker-712000/config.json: {Name:mk844653ff04b37109b9fa9481aa0ac1e8344d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:00:56.174975    9069 start.go:360] acquireMachinesLock for offline-docker-712000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:00:56.175016    9069 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "offline-docker-712000"
	I0717 11:00:56.175028    9069 start.go:93] Provisioning new machine with config: &{Name:offline-docker-712000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-712000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:00:56.175063    9069 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:00:56.179634    9069 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:00:56.195697    9069 start.go:159] libmachine.API.Create for "offline-docker-712000" (driver="qemu2")
	I0717 11:00:56.195744    9069 client.go:168] LocalClient.Create starting
	I0717 11:00:56.195830    9069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:00:56.195859    9069 main.go:141] libmachine: Decoding PEM data...
	I0717 11:00:56.195871    9069 main.go:141] libmachine: Parsing certificate...
	I0717 11:00:56.195918    9069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:00:56.195940    9069 main.go:141] libmachine: Decoding PEM data...
	I0717 11:00:56.195951    9069 main.go:141] libmachine: Parsing certificate...
	I0717 11:00:56.196318    9069 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:00:56.330207    9069 main.go:141] libmachine: Creating SSH key...
	I0717 11:00:56.422662    9069 main.go:141] libmachine: Creating Disk image...
	I0717 11:00:56.422670    9069 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:00:56.427234    9069 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2
	I0717 11:00:56.439184    9069 main.go:141] libmachine: STDOUT: 
	I0717 11:00:56.439206    9069 main.go:141] libmachine: STDERR: 
	I0717 11:00:56.439270    9069 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2 +20000M
	I0717 11:00:56.447601    9069 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:00:56.447623    9069 main.go:141] libmachine: STDERR: 
	I0717 11:00:56.447639    9069 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2
	I0717 11:00:56.447643    9069 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:00:56.447654    9069 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:00:56.447684    9069 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:41:0c:df:21:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2
	I0717 11:00:56.449603    9069 main.go:141] libmachine: STDOUT: 
	I0717 11:00:56.449626    9069 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:00:56.449644    9069 client.go:171] duration metric: took 253.902375ms to LocalClient.Create
	I0717 11:00:58.451703    9069 start.go:128] duration metric: took 2.2766875s to createHost
	I0717 11:00:58.451724    9069 start.go:83] releasing machines lock for "offline-docker-712000", held for 2.276758875s
	W0717 11:00:58.451739    9069 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:58.461151    9069 out.go:177] * Deleting "offline-docker-712000" in qemu2 ...
	W0717 11:00:58.471571    9069 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:58.471582    9069 start.go:729] Will try again in 5 seconds ...
	I0717 11:01:03.473631    9069 start.go:360] acquireMachinesLock for offline-docker-712000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:03.474068    9069 start.go:364] duration metric: took 338.959µs to acquireMachinesLock for "offline-docker-712000"
	I0717 11:01:03.474240    9069 start.go:93] Provisioning new machine with config: &{Name:offline-docker-712000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:offline-docker-712000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:03.474514    9069 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:03.480221    9069 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:01:03.530399    9069 start.go:159] libmachine.API.Create for "offline-docker-712000" (driver="qemu2")
	I0717 11:01:03.530464    9069 client.go:168] LocalClient.Create starting
	I0717 11:01:03.530595    9069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:01:03.530658    9069 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:03.530676    9069 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:03.530743    9069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:01:03.530788    9069 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:03.530800    9069 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:03.531404    9069 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:03.674405    9069 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:03.769395    9069 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:03.769401    9069 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:03.769563    9069 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2
	I0717 11:01:03.778712    9069 main.go:141] libmachine: STDOUT: 
	I0717 11:01:03.778729    9069 main.go:141] libmachine: STDERR: 
	I0717 11:01:03.778776    9069 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2 +20000M
	I0717 11:01:03.786608    9069 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:03.786622    9069 main.go:141] libmachine: STDERR: 
	I0717 11:01:03.786632    9069 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2
	I0717 11:01:03.786637    9069 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:03.786650    9069 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:03.786676    9069 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e6:aa:20:ab:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/offline-docker-712000/disk.qcow2
	I0717 11:01:03.788266    9069 main.go:141] libmachine: STDOUT: 
	I0717 11:01:03.788280    9069 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:03.788294    9069 client.go:171] duration metric: took 257.830958ms to LocalClient.Create
	I0717 11:01:05.790437    9069 start.go:128] duration metric: took 2.315944958s to createHost
	I0717 11:01:05.790509    9069 start.go:83] releasing machines lock for "offline-docker-712000", held for 2.316459083s
	W0717 11:01:05.790929    9069 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-712000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-712000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:05.800425    9069 out.go:177] 
	W0717 11:01:05.805691    9069 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:01:05.805834    9069 out.go:239] * 
	* 
	W0717 11:01:05.808668    9069 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:01:05.817316    9069 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-712000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-17 11:01:05.834957 -0700 PDT m=+722.325158167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-712000 -n offline-docker-712000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-712000 -n offline-docker-712000: exit status 7 (65.540667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-712000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-712000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-712000
--- FAIL: TestOffline (9.92s)
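Every failure in this run traces back to the same error visible above: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not running on the CI host when the qemu2 driver tried to attach VM networking. A minimal recovery sketch, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver setup (service name and socket path may differ on this agent):

```shell
# Is anything serving the socket the tests expect?
ls -l /var/run/socket_vmnet || echo "socket missing: daemon not running"

# (Re)start the daemon; socket_vmnet must run as root to use vmnet.framework
sudo brew services start socket_vmnet

# Verify the socket exists before re-running the suite
test -S /var/run/socket_vmnet && echo "socket_vmnet ready"
```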

TestAddons/Setup (10.21s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-914000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-914000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.209333416s)

-- stdout --
	* [addons-914000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-914000" primary control-plane node in "addons-914000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-914000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:49:29.260140    7452 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:49:29.260278    7452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:29.260282    7452 out.go:304] Setting ErrFile to fd 2...
	I0717 10:49:29.260284    7452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:29.260406    7452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:49:29.261566    7452 out.go:298] Setting JSON to false
	I0717 10:49:29.277457    7452 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4737,"bootTime":1721233832,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:49:29.277525    7452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:49:29.281804    7452 out.go:177] * [addons-914000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:49:29.288661    7452 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:49:29.288735    7452 notify.go:220] Checking for updates...
	I0717 10:49:29.295770    7452 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:49:29.297131    7452 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:49:29.299785    7452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:49:29.302804    7452 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:49:29.313339    7452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:49:29.315961    7452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:49:29.319778    7452 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 10:49:29.326775    7452 start.go:297] selected driver: qemu2
	I0717 10:49:29.326782    7452 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:49:29.326788    7452 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:49:29.329064    7452 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:49:29.331785    7452 out.go:177] * Automatically selected the socket_vmnet network
	I0717 10:49:29.334891    7452 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:49:29.334912    7452 cni.go:84] Creating CNI manager for ""
	I0717 10:49:29.334921    7452 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:49:29.334926    7452 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 10:49:29.334958    7452 start.go:340] cluster config:
	{Name:addons-914000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:49:29.338612    7452 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:49:29.346785    7452 out.go:177] * Starting "addons-914000" primary control-plane node in "addons-914000" cluster
	I0717 10:49:29.349763    7452 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:49:29.349778    7452 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:49:29.349790    7452 cache.go:56] Caching tarball of preloaded images
	I0717 10:49:29.349849    7452 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:49:29.349855    7452 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:49:29.350055    7452 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/addons-914000/config.json ...
	I0717 10:49:29.350072    7452 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/addons-914000/config.json: {Name:mk3e2ad0203a7cfd62f72e27211095f12e281016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:49:29.350408    7452 start.go:360] acquireMachinesLock for addons-914000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:49:29.350474    7452 start.go:364] duration metric: took 59.584µs to acquireMachinesLock for "addons-914000"
	I0717 10:49:29.350484    7452 start.go:93] Provisioning new machine with config: &{Name:addons-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:addons-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:49:29.350518    7452 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:49:29.357680    7452 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 10:49:29.377180    7452 start.go:159] libmachine.API.Create for "addons-914000" (driver="qemu2")
	I0717 10:49:29.377229    7452 client.go:168] LocalClient.Create starting
	I0717 10:49:29.377351    7452 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 10:49:29.558849    7452 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 10:49:29.622957    7452 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:49:29.857517    7452 main.go:141] libmachine: Creating SSH key...
	I0717 10:49:29.894816    7452 main.go:141] libmachine: Creating Disk image...
	I0717 10:49:29.894827    7452 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:49:29.894989    7452 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2
	I0717 10:49:29.903994    7452 main.go:141] libmachine: STDOUT: 
	I0717 10:49:29.904012    7452 main.go:141] libmachine: STDERR: 
	I0717 10:49:29.904068    7452 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2 +20000M
	I0717 10:49:29.912102    7452 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:49:29.912128    7452 main.go:141] libmachine: STDERR: 
	I0717 10:49:29.912142    7452 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2
	I0717 10:49:29.912145    7452 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:49:29.912177    7452 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:49:29.912212    7452 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:62:e5:a9:ac:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2
	I0717 10:49:29.913834    7452 main.go:141] libmachine: STDOUT: 
	I0717 10:49:29.913850    7452 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:49:29.913869    7452 client.go:171] duration metric: took 536.647292ms to LocalClient.Create
	I0717 10:49:31.916029    7452 start.go:128] duration metric: took 2.565547917s to createHost
	I0717 10:49:31.916088    7452 start.go:83] releasing machines lock for "addons-914000", held for 2.565668125s
	W0717 10:49:31.916152    7452 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:49:31.929397    7452 out.go:177] * Deleting "addons-914000" in qemu2 ...
	W0717 10:49:31.956910    7452 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:49:31.956952    7452 start.go:729] Will try again in 5 seconds ...
	I0717 10:49:36.959127    7452 start.go:360] acquireMachinesLock for addons-914000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:49:36.959664    7452 start.go:364] duration metric: took 397.792µs to acquireMachinesLock for "addons-914000"
	I0717 10:49:36.959774    7452 start.go:93] Provisioning new machine with config: &{Name:addons-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:addons-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:49:36.960037    7452 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:49:36.969480    7452 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 10:49:37.019533    7452 start.go:159] libmachine.API.Create for "addons-914000" (driver="qemu2")
	I0717 10:49:37.019577    7452 client.go:168] LocalClient.Create starting
	I0717 10:49:37.019761    7452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 10:49:37.019845    7452 main.go:141] libmachine: Decoding PEM data...
	I0717 10:49:37.019870    7452 main.go:141] libmachine: Parsing certificate...
	I0717 10:49:37.019973    7452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 10:49:37.020022    7452 main.go:141] libmachine: Decoding PEM data...
	I0717 10:49:37.020037    7452 main.go:141] libmachine: Parsing certificate...
	I0717 10:49:37.020523    7452 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:49:37.158751    7452 main.go:141] libmachine: Creating SSH key...
	I0717 10:49:37.378485    7452 main.go:141] libmachine: Creating Disk image...
	I0717 10:49:37.378499    7452 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:49:37.378667    7452 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2
	I0717 10:49:37.387993    7452 main.go:141] libmachine: STDOUT: 
	I0717 10:49:37.388020    7452 main.go:141] libmachine: STDERR: 
	I0717 10:49:37.388079    7452 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2 +20000M
	I0717 10:49:37.396184    7452 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:49:37.396202    7452 main.go:141] libmachine: STDERR: 
	I0717 10:49:37.396217    7452 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2
	I0717 10:49:37.396223    7452 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:49:37.396234    7452 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:49:37.396265    7452 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:5a:da:53:de:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/addons-914000/disk.qcow2
	I0717 10:49:37.397949    7452 main.go:141] libmachine: STDOUT: 
	I0717 10:49:37.397963    7452 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:49:37.397977    7452 client.go:171] duration metric: took 378.40425ms to LocalClient.Create
	I0717 10:49:39.400201    7452 start.go:128] duration metric: took 2.44013475s to createHost
	I0717 10:49:39.400301    7452 start.go:83] releasing machines lock for "addons-914000", held for 2.440660209s
	W0717 10:49:39.400673    7452 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-914000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-914000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:49:39.410088    7452 out.go:177] 
	W0717 10:49:39.416277    7452 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:49:39.416330    7452 out.go:239] * 
	* 
	W0717 10:49:39.418809    7452 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:49:39.427122    7452 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-914000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.21s)
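Note: every failure in this report traces to the same root cause — `socket_vmnet_client` cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet` ("Connection refused"). A minimal diagnostic sketch follows; the socket path and client binary path are taken from the log above, while the `brew services` restart step is an assumption about how socket_vmnet was installed on this runner:

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon is reachable before re-running the suite.
SOCKET=/var/run/socket_vmnet

# 1. The daemon listens on a Unix socket; if the file is absent, it is not running.
if [ -S "$SOCKET" ]; then
  echo "socket exists: $SOCKET"
else
  echo "socket missing: $SOCKET (daemon likely not running)"
fi

# 2. Look for a live daemon process (prints nothing if none is found).
pgrep -fl socket_vmnet || echo "no socket_vmnet process found"

# 3. If the daemon is down, restart it. socket_vmnet must run as root to use
#    vmnet.framework; with a Homebrew install this would typically be:
#      sudo brew services restart socket_vmnet
```

If the socket check passes but the tests still fail, verify that the `SocketVMnetClientPath` and `SocketVMnetPath` values in the machine config above match the actual install locations.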

TestCertOptions (10.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions


=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-634000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-634000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.765677709s)

-- stdout --
	* [cert-options-634000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-634000" primary control-plane node in "cert-options-634000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-634000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-634000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-634000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-634000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.433417ms)

-- stdout --
	* The control-plane node cert-options-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-634000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-634000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-634000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-634000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-634000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.368ms)

-- stdout --
	* The control-plane node cert-options-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-634000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-634000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-634000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-17 11:01:36.405436 -0700 PDT m=+752.896384501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-634000 -n cert-options-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-634000 -n cert-options-634000: exit status 7 (30.269583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-634000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-634000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-634000
--- FAIL: TestCertOptions (10.03s)

TestCertExpiration (195.17s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration


=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-095000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-095000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.7932335s)

-- stdout --
	* [cert-expiration-095000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-095000" primary control-plane node in "cert-expiration-095000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-095000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-095000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-095000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-095000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-095000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.218626792s)

-- stdout --
	* [cert-expiration-095000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-095000" primary control-plane node in "cert-expiration-095000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-095000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-095000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-095000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-095000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-095000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-095000" primary control-plane node in "cert-expiration-095000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-095000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-095000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-095000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-17 11:04:36.495188 -0700 PDT m=+932.935536084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-095000 -n cert-expiration-095000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-095000 -n cert-expiration-095000: exit status 7 (70.846041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-095000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-095000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-095000
--- FAIL: TestCertExpiration (195.17s)

TestDockerFlags (10.17s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-212000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-212000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.938392666s)

-- stdout --
	* [docker-flags-212000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-212000" primary control-plane node in "docker-flags-212000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-212000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:01:16.343906    9274 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:01:16.344039    9274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:01:16.344042    9274 out.go:304] Setting ErrFile to fd 2...
	I0717 11:01:16.344045    9274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:01:16.344216    9274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:01:16.345292    9274 out.go:298] Setting JSON to false
	I0717 11:01:16.361553    9274 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5444,"bootTime":1721233832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:01:16.361630    9274 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:01:16.369218    9274 out.go:177] * [docker-flags-212000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:01:16.378218    9274 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:01:16.378242    9274 notify.go:220] Checking for updates...
	I0717 11:01:16.382157    9274 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:01:16.385205    9274 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:01:16.388277    9274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:01:16.391250    9274 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:01:16.394174    9274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:01:16.397601    9274 config.go:182] Loaded profile config "force-systemd-flag-060000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:01:16.397673    9274 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:01:16.397722    9274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:01:16.402153    9274 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:01:16.409205    9274 start.go:297] selected driver: qemu2
	I0717 11:01:16.409213    9274 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:01:16.409220    9274 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:01:16.411624    9274 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:01:16.414160    9274 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:01:16.417275    9274 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0717 11:01:16.417315    9274 cni.go:84] Creating CNI manager for ""
	I0717 11:01:16.417322    9274 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:01:16.417327    9274 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:01:16.417363    9274 start.go:340] cluster config:
	{Name:docker-flags-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:01:16.421129    9274 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:01:16.426174    9274 out.go:177] * Starting "docker-flags-212000" primary control-plane node in "docker-flags-212000" cluster
	I0717 11:01:16.430231    9274 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:01:16.430248    9274 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:01:16.430257    9274 cache.go:56] Caching tarball of preloaded images
	I0717 11:01:16.430329    9274 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:01:16.430335    9274 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:01:16.430395    9274 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/docker-flags-212000/config.json ...
	I0717 11:01:16.430408    9274 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/docker-flags-212000/config.json: {Name:mk4d03784029e60600948bb0b56987593f302837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:01:16.430611    9274 start.go:360] acquireMachinesLock for docker-flags-212000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:16.430647    9274 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "docker-flags-212000"
	I0717 11:01:16.430657    9274 start.go:93] Provisioning new machine with config: &{Name:docker-flags-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:16.430683    9274 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:16.438259    9274 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:01:16.456112    9274 start.go:159] libmachine.API.Create for "docker-flags-212000" (driver="qemu2")
	I0717 11:01:16.456141    9274 client.go:168] LocalClient.Create starting
	I0717 11:01:16.456207    9274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:01:16.456239    9274 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:16.456249    9274 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:16.456290    9274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:01:16.456314    9274 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:16.456320    9274 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:16.456672    9274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:16.588642    9274 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:16.642908    9274 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:16.642914    9274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:16.643075    9274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2
	I0717 11:01:16.652387    9274 main.go:141] libmachine: STDOUT: 
	I0717 11:01:16.652403    9274 main.go:141] libmachine: STDERR: 
	I0717 11:01:16.652459    9274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2 +20000M
	I0717 11:01:16.660331    9274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:16.660345    9274 main.go:141] libmachine: STDERR: 
	I0717 11:01:16.660363    9274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2
	I0717 11:01:16.660369    9274 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:16.660383    9274 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:16.660412    9274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bf:96:0a:0b:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2
	I0717 11:01:16.662020    9274 main.go:141] libmachine: STDOUT: 
	I0717 11:01:16.662033    9274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:16.662054    9274 client.go:171] duration metric: took 205.914208ms to LocalClient.Create
	I0717 11:01:18.664174    9274 start.go:128] duration metric: took 2.233529083s to createHost
	I0717 11:01:18.664234    9274 start.go:83] releasing machines lock for "docker-flags-212000", held for 2.233632292s
	W0717 11:01:18.664310    9274 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:18.678606    9274 out.go:177] * Deleting "docker-flags-212000" in qemu2 ...
	W0717 11:01:18.702740    9274 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:18.702770    9274 start.go:729] Will try again in 5 seconds ...
	I0717 11:01:23.704857    9274 start.go:360] acquireMachinesLock for docker-flags-212000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:23.861228    9274 start.go:364] duration metric: took 156.209792ms to acquireMachinesLock for "docker-flags-212000"
	I0717 11:01:23.861804    9274 start.go:93] Provisioning new machine with config: &{Name:docker-flags-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:docker-flags-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:23.862167    9274 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:23.875776    9274 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:01:23.927410    9274 start.go:159] libmachine.API.Create for "docker-flags-212000" (driver="qemu2")
	I0717 11:01:23.927470    9274 client.go:168] LocalClient.Create starting
	I0717 11:01:23.927613    9274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:01:23.927678    9274 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:23.927697    9274 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:23.927767    9274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:01:23.927812    9274 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:23.927823    9274 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:23.928320    9274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:24.081524    9274 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:24.180519    9274 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:24.180529    9274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:24.180678    9274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2
	I0717 11:01:24.189815    9274 main.go:141] libmachine: STDOUT: 
	I0717 11:01:24.189835    9274 main.go:141] libmachine: STDERR: 
	I0717 11:01:24.189884    9274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2 +20000M
	I0717 11:01:24.197803    9274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:24.197817    9274 main.go:141] libmachine: STDERR: 
	I0717 11:01:24.197833    9274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2
	I0717 11:01:24.197837    9274 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:24.197845    9274 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:24.197872    9274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c5:7b:c3:d1:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/docker-flags-212000/disk.qcow2
	I0717 11:01:24.199492    9274 main.go:141] libmachine: STDOUT: 
	I0717 11:01:24.199508    9274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:24.199521    9274 client.go:171] duration metric: took 272.053042ms to LocalClient.Create
	I0717 11:01:26.201640    9274 start.go:128] duration metric: took 2.339441459s to createHost
	I0717 11:01:26.201700    9274 start.go:83] releasing machines lock for "docker-flags-212000", held for 2.340496375s
	W0717 11:01:26.202088    9274 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-212000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-212000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:26.217872    9274 out.go:177] 
	W0717 11:01:26.224933    9274 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:01:26.225008    9274 out.go:239] * 
	* 
	W0717 11:01:26.227901    9274 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:01:26.238681    9274 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-212000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (84.114292ms)

-- stdout --
	* The control-plane node docker-flags-212000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-212000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-212000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-212000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-212000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-212000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-212000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (42.723667ms)

-- stdout --
	* The control-plane node docker-flags-212000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-212000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-212000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-212000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-212000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-212000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-17 11:01:26.383618 -0700 PDT m=+742.874321501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-212000 -n docker-flags-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-212000 -n docker-flags-212000: exit status 7 (28.379167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-212000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-212000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-212000
--- FAIL: TestDockerFlags (10.17s)

TestForceSystemdFlag (10.1s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-060000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-060000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.914651042s)

-- stdout --
	* [force-systemd-flag-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-060000" primary control-plane node in "force-systemd-flag-060000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-060000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:01:11.356447    9247 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:01:11.356567    9247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:01:11.356571    9247 out.go:304] Setting ErrFile to fd 2...
	I0717 11:01:11.356573    9247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:01:11.356702    9247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:01:11.357751    9247 out.go:298] Setting JSON to false
	I0717 11:01:11.373545    9247 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5439,"bootTime":1721233832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:01:11.373622    9247 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:01:11.379712    9247 out.go:177] * [force-systemd-flag-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:01:11.386636    9247 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:01:11.386681    9247 notify.go:220] Checking for updates...
	I0717 11:01:11.392661    9247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:01:11.399657    9247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:01:11.406617    9247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:01:11.410601    9247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:01:11.413663    9247 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:01:11.417964    9247 config.go:182] Loaded profile config "force-systemd-env-812000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:01:11.418043    9247 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:01:11.418094    9247 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:01:11.421660    9247 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:01:11.426613    9247 start.go:297] selected driver: qemu2
	I0717 11:01:11.426619    9247 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:01:11.426625    9247 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:01:11.428846    9247 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:01:11.431666    9247 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:01:11.434713    9247 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 11:01:11.434740    9247 cni.go:84] Creating CNI manager for ""
	I0717 11:01:11.434747    9247 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:01:11.434754    9247 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:01:11.434795    9247 start.go:340] cluster config:
	{Name:force-systemd-flag-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:01:11.438415    9247 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:01:11.445651    9247 out.go:177] * Starting "force-systemd-flag-060000" primary control-plane node in "force-systemd-flag-060000" cluster
	I0717 11:01:11.449601    9247 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:01:11.449618    9247 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:01:11.449627    9247 cache.go:56] Caching tarball of preloaded images
	I0717 11:01:11.449704    9247 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:01:11.449720    9247 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:01:11.449777    9247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/force-systemd-flag-060000/config.json ...
	I0717 11:01:11.449790    9247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/force-systemd-flag-060000/config.json: {Name:mkfb5916cfff668a0a263b8495d620c375d35f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:01:11.450018    9247 start.go:360] acquireMachinesLock for force-systemd-flag-060000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:11.450055    9247 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "force-systemd-flag-060000"
	I0717 11:01:11.450067    9247 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:11.450111    9247 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:11.457562    9247 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:01:11.475532    9247 start.go:159] libmachine.API.Create for "force-systemd-flag-060000" (driver="qemu2")
	I0717 11:01:11.475560    9247 client.go:168] LocalClient.Create starting
	I0717 11:01:11.475636    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:01:11.475677    9247 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:11.475691    9247 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:11.475727    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:01:11.475750    9247 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:11.475757    9247 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:11.476177    9247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:11.607834    9247 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:11.788881    9247 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:11.788888    9247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:11.789069    9247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2
	I0717 11:01:11.798671    9247 main.go:141] libmachine: STDOUT: 
	I0717 11:01:11.798689    9247 main.go:141] libmachine: STDERR: 
	I0717 11:01:11.798749    9247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2 +20000M
	I0717 11:01:11.806523    9247 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:11.806548    9247 main.go:141] libmachine: STDERR: 
	I0717 11:01:11.806562    9247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2
	I0717 11:01:11.806567    9247 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:11.806577    9247 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:11.806607    9247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:89:94:2c:f7:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2
	I0717 11:01:11.808250    9247 main.go:141] libmachine: STDOUT: 
	I0717 11:01:11.808265    9247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:11.808282    9247 client.go:171] duration metric: took 332.727583ms to LocalClient.Create
	I0717 11:01:13.810460    9247 start.go:128] duration metric: took 2.360370791s to createHost
	I0717 11:01:13.810544    9247 start.go:83] releasing machines lock for "force-systemd-flag-060000", held for 2.360536708s
	W0717 11:01:13.810684    9247 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:13.833732    9247 out.go:177] * Deleting "force-systemd-flag-060000" in qemu2 ...
	W0717 11:01:13.852772    9247 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:13.852797    9247 start.go:729] Will try again in 5 seconds ...
	I0717 11:01:18.854918    9247 start.go:360] acquireMachinesLock for force-systemd-flag-060000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:18.855490    9247 start.go:364] duration metric: took 446.333µs to acquireMachinesLock for "force-systemd-flag-060000"
	I0717 11:01:18.855650    9247 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-flag-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:18.855911    9247 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:18.865321    9247 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:01:18.917432    9247 start.go:159] libmachine.API.Create for "force-systemd-flag-060000" (driver="qemu2")
	I0717 11:01:18.917480    9247 client.go:168] LocalClient.Create starting
	I0717 11:01:18.917597    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:01:18.917662    9247 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:18.917682    9247 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:18.917755    9247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:01:18.917799    9247 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:18.917813    9247 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:18.918304    9247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:19.065195    9247 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:19.182233    9247 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:19.182242    9247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:19.182404    9247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2
	I0717 11:01:19.191347    9247 main.go:141] libmachine: STDOUT: 
	I0717 11:01:19.191363    9247 main.go:141] libmachine: STDERR: 
	I0717 11:01:19.191404    9247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2 +20000M
	I0717 11:01:19.199232    9247 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:19.199249    9247 main.go:141] libmachine: STDERR: 
	I0717 11:01:19.199260    9247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2
	I0717 11:01:19.199265    9247 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:19.199274    9247 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:19.199300    9247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:27:a9:c7:67:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-flag-060000/disk.qcow2
	I0717 11:01:19.200874    9247 main.go:141] libmachine: STDOUT: 
	I0717 11:01:19.200888    9247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:19.200901    9247 client.go:171] duration metric: took 283.422333ms to LocalClient.Create
	I0717 11:01:21.203112    9247 start.go:128] duration metric: took 2.347217208s to createHost
	I0717 11:01:21.203176    9247 start.go:83] releasing machines lock for "force-systemd-flag-060000", held for 2.347703042s
	W0717 11:01:21.203601    9247 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:21.210780    9247 out.go:177] 
	W0717 11:01:21.216202    9247 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:01:21.216244    9247 out.go:239] * 
	* 
	W0717 11:01:21.218961    9247 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:01:21.229249    9247 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-060000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-060000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-060000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.587417ms)

-- stdout --
	* The control-plane node force-systemd-flag-060000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-060000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-060000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-17 11:01:21.322245 -0700 PDT m=+737.812825292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-060000 -n force-systemd-flag-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-060000 -n force-systemd-flag-060000: exit status 7 (34.867792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-060000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-060000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-060000
--- FAIL: TestForceSystemdFlag (10.10s)

TestForceSystemdEnv (10.34s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-812000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-812000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.157216125s)

-- stdout --
	* [force-systemd-env-812000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-812000" primary control-plane node in "force-systemd-env-812000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-812000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:01:06.005922    9204 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:01:06.006063    9204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:01:06.006067    9204 out.go:304] Setting ErrFile to fd 2...
	I0717 11:01:06.006069    9204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:01:06.006182    9204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:01:06.007160    9204 out.go:298] Setting JSON to false
	I0717 11:01:06.023646    9204 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5434,"bootTime":1721233832,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:01:06.023719    9204 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:01:06.028894    9204 out.go:177] * [force-systemd-env-812000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:01:06.036093    9204 notify.go:220] Checking for updates...
	I0717 11:01:06.040557    9204 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:01:06.048821    9204 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:01:06.056943    9204 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:01:06.063971    9204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:01:06.070921    9204 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:01:06.078928    9204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0717 11:01:06.083252    9204 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:01:06.083305    9204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:01:06.086939    9204 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:01:06.093976    9204 start.go:297] selected driver: qemu2
	I0717 11:01:06.093983    9204 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:01:06.093989    9204 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:01:06.096301    9204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:01:06.099977    9204 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:01:06.104046    9204 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 11:01:06.104077    9204 cni.go:84] Creating CNI manager for ""
	I0717 11:01:06.104084    9204 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:01:06.104092    9204 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:01:06.104123    9204 start.go:340] cluster config:
	{Name:force-systemd-env-812000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:01:06.107718    9204 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:01:06.114960    9204 out.go:177] * Starting "force-systemd-env-812000" primary control-plane node in "force-systemd-env-812000" cluster
	I0717 11:01:06.118937    9204 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:01:06.118953    9204 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:01:06.118962    9204 cache.go:56] Caching tarball of preloaded images
	I0717 11:01:06.119023    9204 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:01:06.119029    9204 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:01:06.119081    9204 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/force-systemd-env-812000/config.json ...
	I0717 11:01:06.119094    9204 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/force-systemd-env-812000/config.json: {Name:mkb07fb4bbf9ab8f6e6bc9af5b4bc4ef20db1747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:01:06.119365    9204 start.go:360] acquireMachinesLock for force-systemd-env-812000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:06.119402    9204 start.go:364] duration metric: took 31.084µs to acquireMachinesLock for "force-systemd-env-812000"
	I0717 11:01:06.119415    9204 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:06.119447    9204 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:06.126847    9204 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:01:06.144097    9204 start.go:159] libmachine.API.Create for "force-systemd-env-812000" (driver="qemu2")
	I0717 11:01:06.144127    9204 client.go:168] LocalClient.Create starting
	I0717 11:01:06.144193    9204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:01:06.144222    9204 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:06.144231    9204 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:06.144265    9204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:01:06.144287    9204 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:06.144295    9204 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:06.144682    9204 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:06.277727    9204 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:06.350512    9204 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:06.350524    9204 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:06.350715    9204 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2
	I0717 11:01:06.360019    9204 main.go:141] libmachine: STDOUT: 
	I0717 11:01:06.360042    9204 main.go:141] libmachine: STDERR: 
	I0717 11:01:06.360102    9204 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2 +20000M
	I0717 11:01:06.368473    9204 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:06.368496    9204 main.go:141] libmachine: STDERR: 
	I0717 11:01:06.368516    9204 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2
	I0717 11:01:06.368520    9204 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:06.368533    9204 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:06.368565    9204 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:74:92:47:25:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2
	I0717 11:01:06.370350    9204 main.go:141] libmachine: STDOUT: 
	I0717 11:01:06.370366    9204 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:06.370385    9204 client.go:171] duration metric: took 226.259584ms to LocalClient.Create
	I0717 11:01:08.372547    9204 start.go:128] duration metric: took 2.253124458s to createHost
	I0717 11:01:08.372604    9204 start.go:83] releasing machines lock for "force-systemd-env-812000", held for 2.253246459s
	W0717 11:01:08.372684    9204 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:08.378988    9204 out.go:177] * Deleting "force-systemd-env-812000" in qemu2 ...
	W0717 11:01:08.408418    9204 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:08.408444    9204 start.go:729] Will try again in 5 seconds ...
	I0717 11:01:13.410573    9204 start.go:360] acquireMachinesLock for force-systemd-env-812000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:01:13.810729    9204 start.go:364] duration metric: took 400.053625ms to acquireMachinesLock for "force-systemd-env-812000"
	I0717 11:01:13.810858    9204 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-812000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:force-systemd-env-812000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:01:13.811114    9204 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:01:13.824689    9204 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0717 11:01:13.874179    9204 start.go:159] libmachine.API.Create for "force-systemd-env-812000" (driver="qemu2")
	I0717 11:01:13.874231    9204 client.go:168] LocalClient.Create starting
	I0717 11:01:13.874359    9204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:01:13.874425    9204 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:13.874441    9204 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:13.874502    9204 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:01:13.874546    9204 main.go:141] libmachine: Decoding PEM data...
	I0717 11:01:13.874563    9204 main.go:141] libmachine: Parsing certificate...
	I0717 11:01:13.875058    9204 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:01:14.030098    9204 main.go:141] libmachine: Creating SSH key...
	I0717 11:01:14.071226    9204 main.go:141] libmachine: Creating Disk image...
	I0717 11:01:14.071231    9204 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:01:14.071395    9204 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2
	I0717 11:01:14.080834    9204 main.go:141] libmachine: STDOUT: 
	I0717 11:01:14.080853    9204 main.go:141] libmachine: STDERR: 
	I0717 11:01:14.080914    9204 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2 +20000M
	I0717 11:01:14.088735    9204 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:01:14.088749    9204 main.go:141] libmachine: STDERR: 
	I0717 11:01:14.088761    9204 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2
	I0717 11:01:14.088764    9204 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:01:14.088780    9204 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:01:14.088805    9204 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:fd:cd:f6:22:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/force-systemd-env-812000/disk.qcow2
	I0717 11:01:14.090418    9204 main.go:141] libmachine: STDOUT: 
	I0717 11:01:14.090433    9204 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:01:14.090447    9204 client.go:171] duration metric: took 216.215208ms to LocalClient.Create
	I0717 11:01:16.092685    9204 start.go:128] duration metric: took 2.281564334s to createHost
	I0717 11:01:16.092767    9204 start.go:83] releasing machines lock for "force-systemd-env-812000", held for 2.28203475s
	W0717 11:01:16.093132    9204 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-812000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:01:16.103599    9204 out.go:177] 
	W0717 11:01:16.107757    9204 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:01:16.107785    9204 out.go:239] * 
	* 
	W0717 11:01:16.110212    9204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:01:16.119522    9204 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-812000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-812000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-812000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (72.766584ms)

-- stdout --
	* The control-plane node force-systemd-env-812000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-812000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-812000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-17 11:01:16.209776 -0700 PDT m=+732.700230709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-812000 -n force-systemd-env-812000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-812000 -n force-systemd-env-812000: exit status 7 (33.145333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-812000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-812000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-812000
--- FAIL: TestForceSystemdEnv (10.34s)

TestErrorSpam/setup (9.84s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-854000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-854000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 --driver=qemu2 : exit status 80 (9.841211709s)

-- stdout --
	* [nospam-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-854000" primary control-plane node in "nospam-854000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-854000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-854000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-854000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19283
- KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-854000" primary control-plane node in "nospam-854000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-854000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.84s)

TestFunctional/serial/StartWithProxy (9.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-928000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-928000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.921681375s)

-- stdout --
	* [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-928000" primary control-plane node in "functional-928000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-928000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51091 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51091 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51091 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-928000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-928000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19283
- KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-928000" primary control-plane node in "functional-928000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-928000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused



, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51091 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51091 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51091 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-928000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (68.51125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.99s)

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-928000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-928000 --alsologtostderr -v=8: exit status 80 (5.185721708s)

-- stdout --
	* [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-928000" primary control-plane node in "functional-928000" cluster
	* Restarting existing qemu2 VM for "functional-928000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-928000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:50:09.988473    7619 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:50:09.988601    7619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:50:09.988605    7619 out.go:304] Setting ErrFile to fd 2...
	I0717 10:50:09.988607    7619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:50:09.988728    7619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:50:09.989737    7619 out.go:298] Setting JSON to false
	I0717 10:50:10.005839    7619 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4777,"bootTime":1721233832,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:50:10.005910    7619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:50:10.011402    7619 out.go:177] * [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:50:10.018344    7619 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:50:10.018406    7619 notify.go:220] Checking for updates...
	I0717 10:50:10.025275    7619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:50:10.028413    7619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:50:10.031329    7619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:50:10.032722    7619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:50:10.035289    7619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:50:10.038648    7619 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:50:10.038708    7619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:50:10.043163    7619 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:50:10.050314    7619 start.go:297] selected driver: qemu2
	I0717 10:50:10.050323    7619 start.go:901] validating driver "qemu2" against &{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:50:10.050390    7619 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:50:10.052691    7619 cni.go:84] Creating CNI manager for ""
	I0717 10:50:10.052707    7619 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:50:10.052748    7619 start.go:340] cluster config:
	{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:50:10.056075    7619 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:50:10.063287    7619 out.go:177] * Starting "functional-928000" primary control-plane node in "functional-928000" cluster
	I0717 10:50:10.067326    7619 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:50:10.067345    7619 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:50:10.067362    7619 cache.go:56] Caching tarball of preloaded images
	I0717 10:50:10.067434    7619 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:50:10.067440    7619 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:50:10.067493    7619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/functional-928000/config.json ...
	I0717 10:50:10.067925    7619 start.go:360] acquireMachinesLock for functional-928000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:50:10.067953    7619 start.go:364] duration metric: took 22.333µs to acquireMachinesLock for "functional-928000"
	I0717 10:50:10.067962    7619 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:50:10.067967    7619 fix.go:54] fixHost starting: 
	I0717 10:50:10.068089    7619 fix.go:112] recreateIfNeeded on functional-928000: state=Stopped err=<nil>
	W0717 10:50:10.068098    7619 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:50:10.076360    7619 out.go:177] * Restarting existing qemu2 VM for "functional-928000" ...
	I0717 10:50:10.080277    7619 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:50:10.080312    7619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:60:b1:a8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/disk.qcow2
	I0717 10:50:10.082351    7619 main.go:141] libmachine: STDOUT: 
	I0717 10:50:10.082371    7619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:50:10.082399    7619 fix.go:56] duration metric: took 14.431292ms for fixHost
	I0717 10:50:10.082403    7619 start.go:83] releasing machines lock for "functional-928000", held for 14.446083ms
	W0717 10:50:10.082410    7619 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:50:10.082441    7619 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:50:10.082446    7619 start.go:729] Will try again in 5 seconds ...
	I0717 10:50:15.084556    7619 start.go:360] acquireMachinesLock for functional-928000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:50:15.085058    7619 start.go:364] duration metric: took 352.917µs to acquireMachinesLock for "functional-928000"
	I0717 10:50:15.085244    7619 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:50:15.085266    7619 fix.go:54] fixHost starting: 
	I0717 10:50:15.086036    7619 fix.go:112] recreateIfNeeded on functional-928000: state=Stopped err=<nil>
	W0717 10:50:15.086067    7619 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:50:15.091575    7619 out.go:177] * Restarting existing qemu2 VM for "functional-928000" ...
	I0717 10:50:15.098553    7619 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:50:15.098898    7619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:60:b1:a8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/disk.qcow2
	I0717 10:50:15.108278    7619 main.go:141] libmachine: STDOUT: 
	I0717 10:50:15.108355    7619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:50:15.108421    7619 fix.go:56] duration metric: took 23.157209ms for fixHost
	I0717 10:50:15.108437    7619 start.go:83] releasing machines lock for "functional-928000", held for 23.32875ms
	W0717 10:50:15.108613    7619 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-928000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-928000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:50:15.115531    7619 out.go:177] 
	W0717 10:50:15.119443    7619 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:50:15.119465    7619 out.go:239] * 
	* 
	W0717 10:50:15.122025    7619 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:50:15.130485    7619 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-928000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.187463084s for "functional-928000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (70.605875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.640458ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-928000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (30.514459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-928000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-928000 get po -A: exit status 1 (26.423625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-928000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-928000\n"*: args "kubectl --context functional-928000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-928000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (30.25975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh sudo crictl images: exit status 83 (40.731417ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-928000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (38.756792ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-928000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.852541ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.00125ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-928000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 kubectl -- --context functional-928000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 kubectl -- --context functional-928000 get pods: exit status 1 (700.720416ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-928000
	* no server found for cluster "functional-928000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-928000 kubectl -- --context functional-928000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (31.690833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.73s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-928000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-928000 get pods: exit status 1 (950.770042ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-928000
	* no server found for cluster "functional-928000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-928000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (28.766667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

TestFunctional/serial/ExtraConfig (5.3s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-928000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-928000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.228059541s)

-- stdout --
	* [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-928000" primary control-plane node in "functional-928000" cluster
	* Restarting existing qemu2 VM for "functional-928000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-928000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-928000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-928000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.228564583s for "functional-928000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (67.809709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.30s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-928000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-928000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.993666ms)

** stderr ** 
	error: context "functional-928000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-928000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (29.423417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 logs: exit status 83 (79.639625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | -p download-only-580000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| start   | -o=json --download-only                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | -p download-only-478000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-478000                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| start   | -o=json --download-only                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | -p download-only-012000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-012000                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-478000                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-012000                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| start   | --download-only -p                                                       | binary-mirror-738000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | binary-mirror-738000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51055                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-738000                                                  | binary-mirror-738000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| addons  | enable dashboard -p                                                      | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | addons-914000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | addons-914000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-914000 --wait=true                                             | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-914000                                                         | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| start   | -p nospam-854000 -n=1 --memory=2250 --wait=false                         | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-854000                                                         | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	|         | minikube-local-cache-test:functional-928000                              |                      |         |         |                     |                     |
	| cache   | functional-928000 cache delete                                           | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	|         | minikube-local-cache-test:functional-928000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	| ssh     | functional-928000 ssh sudo                                               | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-928000                                                        | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-928000 ssh                                                    | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-928000 cache reload                                           | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	| ssh     | functional-928000 ssh                                                    | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-928000 kubectl --                                             | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
	|         | --context functional-928000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:50:20
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:50:20.292463    7697 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:50:20.292584    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:50:20.292587    7697 out.go:304] Setting ErrFile to fd 2...
	I0717 10:50:20.292588    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:50:20.292720    7697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:50:20.293790    7697 out.go:298] Setting JSON to false
	I0717 10:50:20.309645    7697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4788,"bootTime":1721233832,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:50:20.309702    7697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:50:20.314734    7697 out.go:177] * [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:50:20.326571    7697 notify.go:220] Checking for updates...
	I0717 10:50:20.331539    7697 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:50:20.341502    7697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:50:20.351597    7697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:50:20.358543    7697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:50:20.370570    7697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:50:20.377586    7697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:50:20.383794    7697 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:50:20.383864    7697 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:50:20.388510    7697 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:50:20.397538    7697 start.go:297] selected driver: qemu2
	I0717 10:50:20.397542    7697 start.go:901] validating driver "qemu2" against &{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:50:20.397587    7697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:50:20.400171    7697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:50:20.400196    7697 cni.go:84] Creating CNI manager for ""
	I0717 10:50:20.400202    7697 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:50:20.400245    7697 start.go:340] cluster config:
	{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:50:20.404277    7697 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:50:20.412539    7697 out.go:177] * Starting "functional-928000" primary control-plane node in "functional-928000" cluster
	I0717 10:50:20.416538    7697 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:50:20.416551    7697 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:50:20.416560    7697 cache.go:56] Caching tarball of preloaded images
	I0717 10:50:20.416623    7697 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:50:20.416627    7697 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:50:20.416696    7697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/functional-928000/config.json ...
	I0717 10:50:20.417165    7697 start.go:360] acquireMachinesLock for functional-928000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:50:20.417204    7697 start.go:364] duration metric: took 34µs to acquireMachinesLock for "functional-928000"
	I0717 10:50:20.417211    7697 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:50:20.417216    7697 fix.go:54] fixHost starting: 
	I0717 10:50:20.417353    7697 fix.go:112] recreateIfNeeded on functional-928000: state=Stopped err=<nil>
	W0717 10:50:20.417362    7697 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:50:20.425514    7697 out.go:177] * Restarting existing qemu2 VM for "functional-928000" ...
	I0717 10:50:20.429524    7697 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:50:20.429563    7697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:60:b1:a8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/disk.qcow2
	I0717 10:50:20.431931    7697 main.go:141] libmachine: STDOUT: 
	I0717 10:50:20.431948    7697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:50:20.431986    7697 fix.go:56] duration metric: took 14.76975ms for fixHost
	I0717 10:50:20.431989    7697 start.go:83] releasing machines lock for "functional-928000", held for 14.782375ms
	W0717 10:50:20.431996    7697 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:50:20.432040    7697 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:50:20.432046    7697 start.go:729] Will try again in 5 seconds ...
	I0717 10:50:25.434114    7697 start.go:360] acquireMachinesLock for functional-928000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:50:25.434572    7697 start.go:364] duration metric: took 369.292µs to acquireMachinesLock for "functional-928000"
	I0717 10:50:25.434695    7697 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:50:25.434709    7697 fix.go:54] fixHost starting: 
	I0717 10:50:25.435427    7697 fix.go:112] recreateIfNeeded on functional-928000: state=Stopped err=<nil>
	W0717 10:50:25.435449    7697 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:50:25.440012    7697 out.go:177] * Restarting existing qemu2 VM for "functional-928000" ...
	I0717 10:50:25.447986    7697 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:50:25.448246    7697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:60:b1:a8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/disk.qcow2
	I0717 10:50:25.457712    7697 main.go:141] libmachine: STDOUT: 
	I0717 10:50:25.457778    7697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:50:25.457887    7697 fix.go:56] duration metric: took 23.181166ms for fixHost
	I0717 10:50:25.457900    7697 start.go:83] releasing machines lock for "functional-928000", held for 23.314916ms
	W0717 10:50:25.458080    7697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-928000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:50:25.464005    7697 out.go:177] 
	W0717 10:50:25.468043    7697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:50:25.468078    7697 out.go:239] * 
	W0717 10:50:25.470914    7697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:50:25.477743    7697 out.go:177] 
	
	
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

                                                
                                                
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-928000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
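The repeated STDERR lines in the log above point at the root cause: QEMU's socket_vmnet client cannot reach the daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so every restart attempt fails before the VM boots and `minikube logs` never sees a running Linux guest. A minimal pre-flight check, sketched below, distinguishes a missing or dead socket from other start failures. The `check_vmnet_socket` helper name is hypothetical; the socket path is the one from the log.

```shell
# Hypothetical pre-flight helper: verify that the socket_vmnet unix socket
# exists before minikube asks QEMU to connect to it.
check_vmnet_socket() {
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "socket missing: $1"
  fi
}

# Path taken from the failing invocation in the log above.
check_vmnet_socket /var/run/socket_vmnet
```

If the socket is missing, restarting the daemon before re-running `minikube start -p functional-928000` is the usual remedy (for a Homebrew install, `sudo brew services restart socket_vmnet`); that the CI host uses a Homebrew-managed socket_vmnet is an assumption, not something this log confirms.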
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | -p download-only-580000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | -o=json --download-only                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | -p download-only-478000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-478000                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | -o=json --download-only                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | -p download-only-012000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-012000                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-478000                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-012000                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | --download-only -p                                                       | binary-mirror-738000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | binary-mirror-738000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51055                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-738000                                                  | binary-mirror-738000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| addons  | enable dashboard -p                                                      | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | addons-914000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | addons-914000                                                            |                      |         |         |                     |                     |
| start   | -p addons-914000 --wait=true                                             | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-914000                                                         | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | -p nospam-854000 -n=1 --memory=2250 --wait=false                         | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-854000                                                         | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | minikube-local-cache-test:functional-928000                              |                      |         |         |                     |                     |
| cache   | functional-928000 cache delete                                           | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | minikube-local-cache-test:functional-928000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
| ssh     | functional-928000 ssh sudo                                               | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-928000                                                        | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-928000 ssh                                                    | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-928000 cache reload                                           | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
| ssh     | functional-928000 ssh                                                    | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-928000 kubectl --                                             | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | --context functional-928000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/17 10:50:20
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0717 10:50:20.292463    7697 out.go:291] Setting OutFile to fd 1 ...
I0717 10:50:20.292584    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:50:20.292587    7697 out.go:304] Setting ErrFile to fd 2...
I0717 10:50:20.292588    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:50:20.292720    7697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:50:20.293790    7697 out.go:298] Setting JSON to false
I0717 10:50:20.309645    7697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4788,"bootTime":1721233832,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0717 10:50:20.309702    7697 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0717 10:50:20.314734    7697 out.go:177] * [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0717 10:50:20.326571    7697 notify.go:220] Checking for updates...
I0717 10:50:20.331539    7697 out.go:177]   - MINIKUBE_LOCATION=19283
I0717 10:50:20.341502    7697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
I0717 10:50:20.351597    7697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0717 10:50:20.358543    7697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 10:50:20.370570    7697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
I0717 10:50:20.377586    7697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0717 10:50:20.383794    7697 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:50:20.383864    7697 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 10:50:20.388510    7697 out.go:177] * Using the qemu2 driver based on existing profile
I0717 10:50:20.397538    7697 start.go:297] selected driver: qemu2
I0717 10:50:20.397542    7697 start.go:901] validating driver "qemu2" against &{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:50:20.397587    7697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 10:50:20.400171    7697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 10:50:20.400196    7697 cni.go:84] Creating CNI manager for ""
I0717 10:50:20.400202    7697 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0717 10:50:20.400245    7697 start.go:340] cluster config:
{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:50:20.404277    7697 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 10:50:20.412539    7697 out.go:177] * Starting "functional-928000" primary control-plane node in "functional-928000" cluster
I0717 10:50:20.416538    7697 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:50:20.416551    7697 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0717 10:50:20.416560    7697 cache.go:56] Caching tarball of preloaded images
I0717 10:50:20.416623    7697 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0717 10:50:20.416627    7697 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0717 10:50:20.416696    7697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/functional-928000/config.json ...
I0717 10:50:20.417165    7697 start.go:360] acquireMachinesLock for functional-928000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:50:20.417204    7697 start.go:364] duration metric: took 34µs to acquireMachinesLock for "functional-928000"
I0717 10:50:20.417211    7697 start.go:96] Skipping create...Using existing machine configuration
I0717 10:50:20.417216    7697 fix.go:54] fixHost starting: 
I0717 10:50:20.417353    7697 fix.go:112] recreateIfNeeded on functional-928000: state=Stopped err=<nil>
W0717 10:50:20.417362    7697 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:50:20.425514    7697 out.go:177] * Restarting existing qemu2 VM for "functional-928000" ...
I0717 10:50:20.429524    7697 qemu.go:418] Using hvf for hardware acceleration
I0717 10:50:20.429563    7697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:60:b1:a8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/disk.qcow2
I0717 10:50:20.431931    7697 main.go:141] libmachine: STDOUT: 
I0717 10:50:20.431948    7697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0717 10:50:20.431986    7697 fix.go:56] duration metric: took 14.76975ms for fixHost
I0717 10:50:20.431989    7697 start.go:83] releasing machines lock for "functional-928000", held for 14.782375ms
W0717 10:50:20.431996    7697 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0717 10:50:20.432040    7697 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0717 10:50:20.432046    7697 start.go:729] Will try again in 5 seconds ...
I0717 10:50:25.434114    7697 start.go:360] acquireMachinesLock for functional-928000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:50:25.434572    7697 start.go:364] duration metric: took 369.292µs to acquireMachinesLock for "functional-928000"
I0717 10:50:25.434695    7697 start.go:96] Skipping create...Using existing machine configuration
I0717 10:50:25.434709    7697 fix.go:54] fixHost starting: 
I0717 10:50:25.435427    7697 fix.go:112] recreateIfNeeded on functional-928000: state=Stopped err=<nil>
W0717 10:50:25.435449    7697 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:50:25.440012    7697 out.go:177] * Restarting existing qemu2 VM for "functional-928000" ...
I0717 10:50:25.447986    7697 qemu.go:418] Using hvf for hardware acceleration
I0717 10:50:25.448246    7697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:60:b1:a8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/disk.qcow2
I0717 10:50:25.457712    7697 main.go:141] libmachine: STDOUT: 
I0717 10:50:25.457778    7697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0717 10:50:25.457887    7697 fix.go:56] duration metric: took 23.181166ms for fixHost
I0717 10:50:25.457900    7697 start.go:83] releasing machines lock for "functional-928000", held for 23.314916ms
W0717 10:50:25.458080    7697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-928000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0717 10:50:25.464005    7697 out.go:177] 
W0717 10:50:25.468043    7697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0717 10:50:25.468078    7697 out.go:239] * 
W0717 10:50:25.470914    7697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:50:25.477743    7697 out.go:177] 

* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2161727215/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | -p download-only-580000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | -o=json --download-only                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | -p download-only-478000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-478000                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | -o=json --download-only                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | -p download-only-012000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-012000                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-580000                                                  | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-478000                                                  | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| delete  | -p download-only-012000                                                  | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | --download-only -p                                                       | binary-mirror-738000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | binary-mirror-738000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51055                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-738000                                                  | binary-mirror-738000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| addons  | enable dashboard -p                                                      | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | addons-914000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | addons-914000                                                            |                      |         |         |                     |                     |
| start   | -p addons-914000 --wait=true                                             | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-914000                                                         | addons-914000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | -p nospam-854000 -n=1 --memory=2250 --wait=false                         | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-854000 --log_dir                                                  | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-854000                                                         | nospam-854000        | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-928000 cache add                                              | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | minikube-local-cache-test:functional-928000                              |                      |         |         |                     |                     |
| cache   | functional-928000 cache delete                                           | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | minikube-local-cache-test:functional-928000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
| ssh     | functional-928000 ssh sudo                                               | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-928000                                                        | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-928000 ssh                                                    | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-928000 cache reload                                           | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
| ssh     | functional-928000 ssh                                                    | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT | 17 Jul 24 10:50 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-928000 kubectl --                                             | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | --context functional-928000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-928000                                                     | functional-928000    | jenkins | v1.33.1 | 17 Jul 24 10:50 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/17 10:50:20
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0717 10:50:20.292463    7697 out.go:291] Setting OutFile to fd 1 ...
I0717 10:50:20.292584    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:50:20.292587    7697 out.go:304] Setting ErrFile to fd 2...
I0717 10:50:20.292588    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:50:20.292720    7697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:50:20.293790    7697 out.go:298] Setting JSON to false
I0717 10:50:20.309645    7697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4788,"bootTime":1721233832,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0717 10:50:20.309702    7697 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0717 10:50:20.314734    7697 out.go:177] * [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0717 10:50:20.326571    7697 notify.go:220] Checking for updates...
I0717 10:50:20.331539    7697 out.go:177]   - MINIKUBE_LOCATION=19283
I0717 10:50:20.341502    7697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
I0717 10:50:20.351597    7697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0717 10:50:20.358543    7697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 10:50:20.370570    7697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
I0717 10:50:20.377586    7697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0717 10:50:20.383794    7697 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:50:20.383864    7697 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 10:50:20.388510    7697 out.go:177] * Using the qemu2 driver based on existing profile
I0717 10:50:20.397538    7697 start.go:297] selected driver: qemu2
I0717 10:50:20.397542    7697 start.go:901] validating driver "qemu2" against &{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:50:20.397587    7697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 10:50:20.400171    7697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 10:50:20.400196    7697 cni.go:84] Creating CNI manager for ""
I0717 10:50:20.400202    7697 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0717 10:50:20.400245    7697 start.go:340] cluster config:
{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:50:20.404277    7697 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 10:50:20.412539    7697 out.go:177] * Starting "functional-928000" primary control-plane node in "functional-928000" cluster
I0717 10:50:20.416538    7697 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:50:20.416551    7697 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
I0717 10:50:20.416560    7697 cache.go:56] Caching tarball of preloaded images
I0717 10:50:20.416623    7697 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0717 10:50:20.416627    7697 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0717 10:50:20.416696    7697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/functional-928000/config.json ...
I0717 10:50:20.417165    7697 start.go:360] acquireMachinesLock for functional-928000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:50:20.417204    7697 start.go:364] duration metric: took 34µs to acquireMachinesLock for "functional-928000"
I0717 10:50:20.417211    7697 start.go:96] Skipping create...Using existing machine configuration
I0717 10:50:20.417216    7697 fix.go:54] fixHost starting: 
I0717 10:50:20.417353    7697 fix.go:112] recreateIfNeeded on functional-928000: state=Stopped err=<nil>
W0717 10:50:20.417362    7697 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:50:20.425514    7697 out.go:177] * Restarting existing qemu2 VM for "functional-928000" ...
I0717 10:50:20.429524    7697 qemu.go:418] Using hvf for hardware acceleration
I0717 10:50:20.429563    7697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:60:b1:a8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/disk.qcow2
I0717 10:50:20.431931    7697 main.go:141] libmachine: STDOUT: 
I0717 10:50:20.431948    7697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0717 10:50:20.431986    7697 fix.go:56] duration metric: took 14.76975ms for fixHost
I0717 10:50:20.431989    7697 start.go:83] releasing machines lock for "functional-928000", held for 14.782375ms
W0717 10:50:20.431996    7697 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0717 10:50:20.432040    7697 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0717 10:50:20.432046    7697 start.go:729] Will try again in 5 seconds ...
I0717 10:50:25.434114    7697 start.go:360] acquireMachinesLock for functional-928000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:50:25.434572    7697 start.go:364] duration metric: took 369.292µs to acquireMachinesLock for "functional-928000"
I0717 10:50:25.434695    7697 start.go:96] Skipping create...Using existing machine configuration
I0717 10:50:25.434709    7697 fix.go:54] fixHost starting: 
I0717 10:50:25.435427    7697 fix.go:112] recreateIfNeeded on functional-928000: state=Stopped err=<nil>
W0717 10:50:25.435449    7697 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:50:25.440012    7697 out.go:177] * Restarting existing qemu2 VM for "functional-928000" ...
I0717 10:50:25.447986    7697 qemu.go:418] Using hvf for hardware acceleration
I0717 10:50:25.448246    7697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:02:60:b1:a8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/functional-928000/disk.qcow2
I0717 10:50:25.457712    7697 main.go:141] libmachine: STDOUT: 
I0717 10:50:25.457778    7697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0717 10:50:25.457887    7697 fix.go:56] duration metric: took 23.181166ms for fixHost
I0717 10:50:25.457900    7697 start.go:83] releasing machines lock for "functional-928000", held for 23.314916ms
W0717 10:50:25.458080    7697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-928000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0717 10:50:25.464005    7697 out.go:177] 
W0717 10:50:25.468043    7697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0717 10:50:25.468078    7697 out.go:239] * 
W0717 10:50:25.470914    7697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:50:25.477743    7697 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-928000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-928000 apply -f testdata/invalidsvc.yaml: exit status 1 (26.416625ms)

** stderr ** 
	error: context "functional-928000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-928000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-928000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-928000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-928000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-928000 --alsologtostderr -v=1] stderr:
I0717 10:51:00.642656    7907 out.go:291] Setting OutFile to fd 1 ...
I0717 10:51:00.643041    7907 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:00.643045    7907 out.go:304] Setting ErrFile to fd 2...
I0717 10:51:00.643047    7907 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:00.643198    7907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:51:00.643399    7907 mustload.go:65] Loading cluster: functional-928000
I0717 10:51:00.643594    7907 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:51:00.647400    7907 out.go:177] * The control-plane node functional-928000 host is not running: state=Stopped
I0717 10:51:00.651361    7907 out.go:177]   To start a cluster, run: "minikube start -p functional-928000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (42.718625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 status: exit status 7 (74.72625ms)

-- stdout --
	functional-928000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-928000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.218791ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-928000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 status -o json: exit status 7 (29.904542ms)

-- stdout --
	{"Name":"functional-928000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-928000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (29.151833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-928000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-928000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.287708ms)

** stderr ** 
	error: context "functional-928000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-928000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-928000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-928000 describe po hello-node-connect: exit status 1 (26.143125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:1600: "kubectl --context functional-928000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-928000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-928000 logs -l app=hello-node-connect: exit status 1 (26.506458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:1606: "kubectl --context functional-928000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-928000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-928000 describe svc hello-node-connect: exit status 1 (26.767875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:1612: "kubectl --context functional-928000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (30.115958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-928000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (34.368875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "echo hello": exit status 83 (57.72775ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"\n"*. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "cat /etc/hostname": exit status 83 (51.725834ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-928000"- but got *"* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"\n"*. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (29.333875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (55.409125ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-928000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.722667ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-928000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-928000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cp functional-928000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2061743217/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 cp functional-928000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2061743217/001/cp-test.txt: exit status 83 (47.013583ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-928000 cp functional-928000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2061743217/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.583625ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2061743217/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (44.980625ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-928000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (52.81725ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-928000 ssh -n functional-928000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-928000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-928000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7336/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/test/nested/copy/7336/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/test/nested/copy/7336/hosts": exit status 83 (41.888625ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/test/nested/copy/7336/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-928000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-928000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (35.97275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7336.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/ssl/certs/7336.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/ssl/certs/7336.pem": exit status 83 (42.528791ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7336.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"sudo cat /etc/ssl/certs/7336.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7336.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-928000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-928000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7336.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /usr/share/ca-certificates/7336.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /usr/share/ca-certificates/7336.pem": exit status 83 (41.378833ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7336.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"sudo cat /usr/share/ca-certificates/7336.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7336.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-928000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-928000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (40.675333ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-928000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-928000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/73362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/ssl/certs/73362.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/ssl/certs/73362.pem": exit status 83 (41.49075ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/73362.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"sudo cat /etc/ssl/certs/73362.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/73362.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-928000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-928000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/73362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /usr/share/ca-certificates/73362.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /usr/share/ca-certificates/73362.pem": exit status 83 (43.745042ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/73362.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"sudo cat /usr/share/ca-certificates/73362.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/73362.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-928000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-928000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (40.535875ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-928000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-928000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-928000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (29.050375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-928000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-928000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.023375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-928000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-928000 -n functional-928000: exit status 7 (28.734541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-928000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo systemctl is-active crio": exit status 83 (40.22425ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0717 10:50:26.129173    7745 out.go:291] Setting OutFile to fd 1 ...
I0717 10:50:26.131161    7745 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:50:26.131165    7745 out.go:304] Setting ErrFile to fd 2...
I0717 10:50:26.131167    7745 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:50:26.131303    7745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:50:26.135063    7745 mustload.go:65] Loading cluster: functional-928000
I0717 10:50:26.135250    7745 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:50:26.141776    7745 out.go:177] * The control-plane node functional-928000 host is not running: state=Stopped
I0717 10:50:26.148861    7745 out.go:177]   To start a cluster, run: "minikube start -p functional-928000"

stdout: * The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7744: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-928000": client config: context "functional-928000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (106.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-928000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-928000 get svc nginx-svc: exit status 1 (72.027833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-928000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-928000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (106.63s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-928000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-928000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.1085ms)

** stderr ** 
	error: context "functional-928000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-928000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 service list: exit status 83 (42.170125ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-928000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 service list -o json: exit status 83 (40.919625ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-928000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 service --namespace=default --https --url hello-node: exit status 83 (40.688292ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-928000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 service hello-node --url --format={{.IP}}: exit status 83 (40.958041ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-928000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 service hello-node --url: exit status 83 (41.902ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-928000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:1565: failed to parse "* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"": parse "* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 version -o=json --components: exit status 83 (40.857042ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-928000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-928000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-928000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-928000 image ls --format short --alsologtostderr:
I0717 10:51:05.520444    8032 out.go:291] Setting OutFile to fd 1 ...
I0717 10:51:05.520601    8032 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.520605    8032 out.go:304] Setting ErrFile to fd 2...
I0717 10:51:05.520607    8032 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.520725    8032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:51:05.521123    8032 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:51:05.521183    8032 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-928000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-928000 image ls --format table --alsologtostderr:
I0717 10:51:05.734874    8044 out.go:291] Setting OutFile to fd 1 ...
I0717 10:51:05.735013    8044 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.735020    8044 out.go:304] Setting ErrFile to fd 2...
I0717 10:51:05.735022    8044 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.735173    8044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:51:05.735583    8044 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:51:05.735643    8044 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-928000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-928000 image ls --format json --alsologtostderr:
I0717 10:51:05.700274    8042 out.go:291] Setting OutFile to fd 1 ...
I0717 10:51:05.700415    8042 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.700418    8042 out.go:304] Setting ErrFile to fd 2...
I0717 10:51:05.700420    8042 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.700544    8042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:51:05.700941    8042 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:51:05.700999    8042 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-928000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-928000 image ls --format yaml --alsologtostderr:
I0717 10:51:05.555785    8034 out.go:291] Setting OutFile to fd 1 ...
I0717 10:51:05.555935    8034 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.555938    8034 out.go:304] Setting ErrFile to fd 2...
I0717 10:51:05.555940    8034 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.556063    8034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:51:05.556431    8034 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:51:05.556494    8034 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh pgrep buildkitd: exit status 83 (40.303791ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image build -t localhost/my-image:functional-928000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-928000 image build -t localhost/my-image:functional-928000 testdata/build --alsologtostderr:
I0717 10:51:05.630746    8038 out.go:291] Setting OutFile to fd 1 ...
I0717 10:51:05.631083    8038 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.631086    8038 out.go:304] Setting ErrFile to fd 2...
I0717 10:51:05.631088    8038 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:51:05.631209    8038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:51:05.631568    8038 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:51:05.632055    8038 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:51:05.632276    8038 build_images.go:133] succeeded building to: 
I0717 10:51:05.632280    8038 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls
functional_test.go:442: expected "localhost/my-image:functional-928000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image load --daemon docker.io/kicbase/echo-server:functional-928000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-928000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image load --daemon docker.io/kicbase/echo-server:functional-928000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-928000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.27s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-928000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image load --daemon docker.io/kicbase/echo-server:functional-928000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-928000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image save docker.io/kicbase/echo-server:functional-928000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-928000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-928000 docker-env) && out/minikube-darwin-arm64 status -p functional-928000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-928000 docker-env) && out/minikube-darwin-arm64 status -p functional-928000": exit status 1 (44.282875ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2: exit status 83 (42.762708ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
** stderr ** 
	I0717 10:51:05.769967    8046 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:51:05.770947    8046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:05.770950    8046 out.go:304] Setting ErrFile to fd 2...
	I0717 10:51:05.770953    8046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:05.771149    8046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:51:05.771351    8046 mustload.go:65] Loading cluster: functional-928000
	I0717 10:51:05.771538    8046 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:51:05.776308    8046 out.go:177] * The control-plane node functional-928000 host is not running: state=Stopped
	I0717 10:51:05.780314    8046 out.go:177]   To start a cluster, run: "minikube start -p functional-928000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2: exit status 83 (42.563709ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
** stderr ** 
	I0717 10:51:05.860661    8050 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:51:05.860808    8050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:05.860811    8050 out.go:304] Setting ErrFile to fd 2...
	I0717 10:51:05.860814    8050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:05.860940    8050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:51:05.861151    8050 mustload.go:65] Loading cluster: functional-928000
	I0717 10:51:05.861349    8050 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:51:05.866335    8050 out.go:177] * The control-plane node functional-928000 host is not running: state=Stopped
	I0717 10:51:05.870135    8050 out.go:177]   To start a cluster, run: "minikube start -p functional-928000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2: exit status 83 (46.649584ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
** stderr ** 
	I0717 10:51:05.813639    8048 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:51:05.813780    8048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:05.813783    8048 out.go:304] Setting ErrFile to fd 2...
	I0717 10:51:05.813785    8048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:05.813927    8048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:51:05.814150    8048 mustload.go:65] Loading cluster: functional-928000
	I0717 10:51:05.814351    8048 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:51:05.819312    8048 out.go:177] * The control-plane node functional-928000 host is not running: state=Stopped
	I0717 10:51:05.827260    8048 out.go:177]   To start a cluster, run: "minikube start -p functional-928000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-928000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-928000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-928000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.027081667s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 17 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
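Editor's note: the assertion at functional_test_tunnel_test.go:329 is a plain substring check — the dig header must report `ANSWER: 1`, and a timed-out query never prints that header. A minimal sketch of the check in Python (`dig_answered` is a hypothetical helper, not part of the harness):

```python
def dig_answered(output: str, want_answers: int = 1) -> bool:
    """Return True when dig's status header reports the expected number of
    answer records, e.g. ";; ... QUERY: 1, ANSWER: 1, AUTHORITY: 0".
    A query that times out never prints this header, so the check fails."""
    return f"ANSWER: {want_answers}" in output


# The output captured above: dig never reached the cluster DNS at
# 10.96.0.10, so only the timeout message was printed.
timed_out = (
    "; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 "
    "nginx-svc.default.svc.cluster.local. A\n"
    "; (1 server found)\n"
    ";; global options: +cmd\n"
    ";; connection timed out; no servers could be reached\n"
)
```

Run against the captured output, `dig_answered(timed_out)` is False, which is exactly why the test fails even though resolver #8 for `cluster.local` is configured and marked Reachable.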

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (26.92s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (26.92s)
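Editor's note: this failure is the same tunnel outage seen one test earlier — the harness polls the forwarded DNS name until the body contains "Welcome to nginx!" or its deadline expires. The polling loop can be sketched as follows (a loose Python analogue under assumed names, not the Go code in functional_test_tunnel_test.go):

```python
import time
import urllib.error
import urllib.request


def poll_for_body(url: str, substring: str, deadline_s: float,
                  interval_s: float = 0.5):
    """GET url repeatedly until the response body contains substring or
    deadline_s elapses. Returns the body on success, None on timeout --
    the None case corresponds to the "context deadline exceeded" above."""
    deadline = time.monotonic() + deadline_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval_s) as resp:
                body = resp.read().decode("utf-8", "replace")
                if substring in body:
                    return body
        except (urllib.error.URLError, OSError):
            pass  # connection refused / timed out; retry until deadline
        time.sleep(interval_s)
    return None
```

With the tunnel down, every GET errors out, the loop exhausts its ~26 s budget, and the harness reports an empty body — matching the `got *""*` above.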

TestMultiControlPlane/serial/StartCluster (9.81s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-008000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-008000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.742984291s)

-- stdout --
	* [ha-008000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-008000" primary control-plane node in "ha-008000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-008000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:53:05.207197    8143 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:53:05.207327    8143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:05.207331    8143 out.go:304] Setting ErrFile to fd 2...
	I0717 10:53:05.207334    8143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:53:05.207690    8143 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:53:05.209046    8143 out.go:298] Setting JSON to false
	I0717 10:53:05.225305    8143 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4953,"bootTime":1721233832,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:53:05.225371    8143 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:53:05.230313    8143 out.go:177] * [ha-008000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:53:05.238253    8143 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:53:05.238295    8143 notify.go:220] Checking for updates...
	I0717 10:53:05.245208    8143 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:53:05.248192    8143 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:53:05.251203    8143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:53:05.254157    8143 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:53:05.257240    8143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:53:05.260395    8143 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:53:05.263065    8143 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 10:53:05.270209    8143 start.go:297] selected driver: qemu2
	I0717 10:53:05.270217    8143 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:53:05.270225    8143 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:53:05.272618    8143 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:53:05.273945    8143 out.go:177] * Automatically selected the socket_vmnet network
	I0717 10:53:05.277204    8143 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:53:05.277234    8143 cni.go:84] Creating CNI manager for ""
	I0717 10:53:05.277239    8143 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 10:53:05.277242    8143 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 10:53:05.277274    8143 start.go:340] cluster config:
	{Name:ha-008000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:53:05.281111    8143 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:53:05.288201    8143 out.go:177] * Starting "ha-008000" primary control-plane node in "ha-008000" cluster
	I0717 10:53:05.292164    8143 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:53:05.292182    8143 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:53:05.292193    8143 cache.go:56] Caching tarball of preloaded images
	I0717 10:53:05.292258    8143 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:53:05.292264    8143 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:53:05.292457    8143 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/ha-008000/config.json ...
	I0717 10:53:05.292473    8143 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/ha-008000/config.json: {Name:mk424bfd3eee2f1a86b01210d9f2d1b67f5f4285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:53:05.292764    8143 start.go:360] acquireMachinesLock for ha-008000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:53:05.292797    8143 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "ha-008000"
	I0717 10:53:05.292807    8143 start.go:93] Provisioning new machine with config: &{Name:ha-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.2 ClusterName:ha-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:53:05.292850    8143 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:53:05.296133    8143 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 10:53:05.313421    8143 start.go:159] libmachine.API.Create for "ha-008000" (driver="qemu2")
	I0717 10:53:05.313453    8143 client.go:168] LocalClient.Create starting
	I0717 10:53:05.313507    8143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 10:53:05.313539    8143 main.go:141] libmachine: Decoding PEM data...
	I0717 10:53:05.313549    8143 main.go:141] libmachine: Parsing certificate...
	I0717 10:53:05.313588    8143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 10:53:05.313610    8143 main.go:141] libmachine: Decoding PEM data...
	I0717 10:53:05.313618    8143 main.go:141] libmachine: Parsing certificate...
	I0717 10:53:05.314073    8143 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:53:05.442765    8143 main.go:141] libmachine: Creating SSH key...
	I0717 10:53:05.510499    8143 main.go:141] libmachine: Creating Disk image...
	I0717 10:53:05.510503    8143 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:53:05.510668    8143 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:53:05.519956    8143 main.go:141] libmachine: STDOUT: 
	I0717 10:53:05.519973    8143 main.go:141] libmachine: STDERR: 
	I0717 10:53:05.520019    8143 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2 +20000M
	I0717 10:53:05.527826    8143 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:53:05.527839    8143 main.go:141] libmachine: STDERR: 
	I0717 10:53:05.527852    8143 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:53:05.527856    8143 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:53:05.527893    8143 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:53:05.527920    8143 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:2e:20:55:db:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:53:05.529496    8143 main.go:141] libmachine: STDOUT: 
	I0717 10:53:05.529510    8143 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:53:05.529527    8143 client.go:171] duration metric: took 216.076709ms to LocalClient.Create
	I0717 10:53:07.531665    8143 start.go:128] duration metric: took 2.238845917s to createHost
	I0717 10:53:07.531727    8143 start.go:83] releasing machines lock for "ha-008000", held for 2.238974041s
	W0717 10:53:07.531810    8143 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:53:07.543237    8143 out.go:177] * Deleting "ha-008000" in qemu2 ...
	W0717 10:53:07.569944    8143 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:53:07.569975    8143 start.go:729] Will try again in 5 seconds ...
	I0717 10:53:12.572063    8143 start.go:360] acquireMachinesLock for ha-008000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:53:12.572591    8143 start.go:364] duration metric: took 395.125µs to acquireMachinesLock for "ha-008000"
	I0717 10:53:12.572708    8143 start.go:93] Provisioning new machine with config: &{Name:ha-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.2 ClusterName:ha-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:53:12.573036    8143 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:53:12.582728    8143 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 10:53:12.633490    8143 start.go:159] libmachine.API.Create for "ha-008000" (driver="qemu2")
	I0717 10:53:12.633539    8143 client.go:168] LocalClient.Create starting
	I0717 10:53:12.633666    8143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 10:53:12.633740    8143 main.go:141] libmachine: Decoding PEM data...
	I0717 10:53:12.633758    8143 main.go:141] libmachine: Parsing certificate...
	I0717 10:53:12.633819    8143 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 10:53:12.633866    8143 main.go:141] libmachine: Decoding PEM data...
	I0717 10:53:12.633887    8143 main.go:141] libmachine: Parsing certificate...
	I0717 10:53:12.634406    8143 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:53:12.775878    8143 main.go:141] libmachine: Creating SSH key...
	I0717 10:53:12.862462    8143 main.go:141] libmachine: Creating Disk image...
	I0717 10:53:12.862468    8143 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:53:12.862637    8143 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:53:12.871902    8143 main.go:141] libmachine: STDOUT: 
	I0717 10:53:12.871922    8143 main.go:141] libmachine: STDERR: 
	I0717 10:53:12.871966    8143 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2 +20000M
	I0717 10:53:12.879802    8143 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:53:12.879814    8143 main.go:141] libmachine: STDERR: 
	I0717 10:53:12.879827    8143 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:53:12.879831    8143 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:53:12.879837    8143 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:53:12.879867    8143 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:14:6d:a3:db:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:53:12.881466    8143 main.go:141] libmachine: STDOUT: 
	I0717 10:53:12.881482    8143 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:53:12.881493    8143 client.go:171] duration metric: took 247.955167ms to LocalClient.Create
	I0717 10:53:14.883612    8143 start.go:128] duration metric: took 2.310599334s to createHost
	I0717 10:53:14.883657    8143 start.go:83] releasing machines lock for "ha-008000", held for 2.311099292s
	W0717 10:53:14.884133    8143 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-008000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-008000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:53:14.891635    8143 out.go:177] 
	W0717 10:53:14.896694    8143 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:53:14.896716    8143 out.go:239] * 
	* 
	W0717 10:53:14.899448    8143 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:53:14.907619    8143 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-008000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (68.0585ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.81s)
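Editor's note: both provisioning attempts die at the same step — `socket_vmnet_client` cannot reach the daemon's Unix socket at `/var/run/socket_vmnet`, so QEMU never launches. "Connection refused" (rather than "no such file") means the socket file likely exists but no socket_vmnet daemon is accepting on it. That distinction can be probed directly; the following is a hypothetical diagnostic sketch, not minikube code:

```python
import os
import socket


def probe_unix_socket(path: str) -> str:
    """Classify the state of a Unix-domain socket path, mirroring what the
    qemu2 driver effectively hits when it execs socket_vmnet_client.

    "missing"   -> the daemon never created the socket
    "refused"   -> socket file exists but nothing is accepting (stale file
                   or dead daemon; the failure mode seen in this report)
    "listening" -> a daemon is up and accepting connections
    """
    if not os.path.exists(path):
        return "missing"
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "listening"
    except ConnectionRefusedError:
        return "refused"
    except OSError as e:
        return os.strerror(e.errno)
    finally:
        s.close()
```

On this runner, probing `/var/run/socket_vmnet` would presumably return "refused", which is why every subsequent test in the TestMultiControlPlane group fails with "cluster ha-008000 does not exist".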

TestMultiControlPlane/serial/DeployApp (102.19s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.733208ms)

** stderr ** 
	error: cluster "ha-008000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- rollout status deployment/busybox: exit status 1 (56.512458ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.682292ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.1585ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.479208ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.472334ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.292917ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.245ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.254583ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.752166ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.240834ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.741042ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.95725ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.501167ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.360083ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.061291ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.564541ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.233ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (102.19s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-008000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.241042ms)

** stderr ** 
	error: no server found for cluster "ha-008000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.882625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-008000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-008000 -v=7 --alsologtostderr: exit status 83 (42.25625ms)

-- stdout --
	* The control-plane node ha-008000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-008000"

-- /stdout --
** stderr ** 
	I0717 10:54:57.289520    8265 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:57.290109    8265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.290113    8265 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:57.290115    8265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.290305    8265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:54:57.290540    8265 mustload.go:65] Loading cluster: ha-008000
	I0717 10:54:57.290743    8265 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:57.295780    8265 out.go:177] * The control-plane node ha-008000 host is not running: state=Stopped
	I0717 10:54:57.299770    8265 out.go:177]   To start a cluster, run: "minikube start -p ha-008000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-008000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.2395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-008000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-008000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.691125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-008000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-008000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-008000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (30.385125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-008000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-008000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.484708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.07s)

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status --output json -v=7 --alsologtostderr: exit status 7 (29.745041ms)

-- stdout --
	{"Name":"ha-008000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0717 10:54:57.493184    8277 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:57.493331    8277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.493335    8277 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:57.493337    8277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.493465    8277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:54:57.493576    8277 out.go:298] Setting JSON to true
	I0717 10:54:57.493588    8277 mustload.go:65] Loading cluster: ha-008000
	I0717 10:54:57.493653    8277 notify.go:220] Checking for updates...
	I0717 10:54:57.493794    8277 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:57.493800    8277 status.go:255] checking status of ha-008000 ...
	I0717 10:54:57.494007    8277 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:54:57.494011    8277 status.go:343] host is not running, skipping remaining checks
	I0717 10:54:57.494013    8277 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-008000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.5295ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 node stop m02 -v=7 --alsologtostderr: exit status 85 (48.911834ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0717 10:54:57.553520    8281 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:57.553904    8281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.553908    8281 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:57.553910    8281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.554078    8281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:54:57.554331    8281 mustload.go:65] Loading cluster: ha-008000
	I0717 10:54:57.554521    8281 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:57.559069    8281 out.go:177] 
	W0717 10:54:57.562112    8281 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0717 10:54:57.562117    8281 out.go:239] * 
	* 
	W0717 10:54:57.564269    8281 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:54:57.569004    8281 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-008000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (30.033666ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:54:57.602373    8283 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:57.602509    8283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.602512    8283 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:57.602515    8283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.602677    8283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:54:57.602811    8283 out.go:298] Setting JSON to false
	I0717 10:54:57.602821    8283 mustload.go:65] Loading cluster: ha-008000
	I0717 10:54:57.602892    8283 notify.go:220] Checking for updates...
	I0717 10:54:57.603028    8283 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:57.603037    8283 status.go:255] checking status of ha-008000 ...
	I0717 10:54:57.603240    8283 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:54:57.603244    8283 status.go:343] host is not running, skipping remaining checks
	I0717 10:54:57.603246    8283 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr": ha-008000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr": ha-008000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr": ha-008000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr": ha-008000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (30.035333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-008000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.62775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

TestMultiControlPlane/serial/RestartSecondaryNode (49.96s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.845291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0717 10:54:57.738066    8292 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:57.738432    8292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.738437    8292 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:57.738440    8292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.738599    8292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:54:57.738841    8292 mustload.go:65] Loading cluster: ha-008000
	I0717 10:54:57.739039    8292 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:57.743111    8292 out.go:177] 
	W0717 10:54:57.745992    8292 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0717 10:54:57.745996    8292 out.go:239] * 
	* 
	W0717 10:54:57.747954    8292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:54:57.752097    8292 out.go:177] 

** /stderr **
ha_test.go:422: I0717 10:54:57.738066    8292 out.go:291] Setting OutFile to fd 1 ...
I0717 10:54:57.738432    8292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:54:57.738437    8292 out.go:304] Setting ErrFile to fd 2...
I0717 10:54:57.738440    8292 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:54:57.738599    8292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:54:57.738841    8292 mustload.go:65] Loading cluster: ha-008000
I0717 10:54:57.739039    8292 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:54:57.743111    8292 out.go:177] 
W0717 10:54:57.745992    8292 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0717 10:54:57.745996    8292 out.go:239] * 
* 
W0717 10:54:57.747954    8292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:54:57.752097    8292 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-008000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (29.630541ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:54:57.784953    8294 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:57.785122    8294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.785125    8294 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:57.785128    8294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:57.785260    8294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:54:57.785385    8294 out.go:298] Setting JSON to false
	I0717 10:54:57.785396    8294 mustload.go:65] Loading cluster: ha-008000
	I0717 10:54:57.785468    8294 notify.go:220] Checking for updates...
	I0717 10:54:57.785595    8294 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:57.785601    8294 status.go:255] checking status of ha-008000 ...
	I0717 10:54:57.785799    8294 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:54:57.785803    8294 status.go:343] host is not running, skipping remaining checks
	I0717 10:54:57.785805    8294 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (73.487875ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:54:58.641096    8296 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:54:58.641291    8296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:58.641295    8296 out.go:304] Setting ErrFile to fd 2...
	I0717 10:54:58.641299    8296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:54:58.641475    8296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:54:58.641645    8296 out.go:298] Setting JSON to false
	I0717 10:54:58.641657    8296 mustload.go:65] Loading cluster: ha-008000
	I0717 10:54:58.641698    8296 notify.go:220] Checking for updates...
	I0717 10:54:58.641914    8296 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:54:58.641922    8296 status.go:255] checking status of ha-008000 ...
	I0717 10:54:58.642204    8296 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:54:58.642209    8296 status.go:343] host is not running, skipping remaining checks
	I0717 10:54:58.642212    8296 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (72.352833ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:55:00.845658    8298 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:00.845845    8298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:00.845849    8298 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:00.845853    8298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:00.846042    8298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:00.846194    8298 out.go:298] Setting JSON to false
	I0717 10:55:00.846206    8298 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:00.846239    8298 notify.go:220] Checking for updates...
	I0717 10:55:00.846463    8298 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:00.846471    8298 status.go:255] checking status of ha-008000 ...
	I0717 10:55:00.846741    8298 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:55:00.846746    8298 status.go:343] host is not running, skipping remaining checks
	I0717 10:55:00.846749    8298 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (75.665ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:55:02.995467    8302 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:02.995698    8302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:02.995703    8302 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:02.995707    8302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:02.995939    8302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:02.996114    8302 out.go:298] Setting JSON to false
	I0717 10:55:02.996128    8302 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:02.996172    8302 notify.go:220] Checking for updates...
	I0717 10:55:02.996394    8302 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:02.996402    8302 status.go:255] checking status of ha-008000 ...
	I0717 10:55:02.996682    8302 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:55:02.996687    8302 status.go:343] host is not running, skipping remaining checks
	I0717 10:55:02.996690    8302 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (72.696083ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:55:05.472715    8306 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:05.472915    8306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:05.472919    8306 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:05.472922    8306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:05.473099    8306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:05.473253    8306 out.go:298] Setting JSON to false
	I0717 10:55:05.473266    8306 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:05.473302    8306 notify.go:220] Checking for updates...
	I0717 10:55:05.473517    8306 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:05.473524    8306 status.go:255] checking status of ha-008000 ...
	I0717 10:55:05.473829    8306 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:55:05.473834    8306 status.go:343] host is not running, skipping remaining checks
	I0717 10:55:05.473837    8306 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (72.043417ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:55:12.641421    8315 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:12.641665    8315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:12.641670    8315 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:12.641674    8315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:12.641840    8315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:12.642018    8315 out.go:298] Setting JSON to false
	I0717 10:55:12.642030    8315 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:12.642070    8315 notify.go:220] Checking for updates...
	I0717 10:55:12.642296    8315 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:12.642304    8315 status.go:255] checking status of ha-008000 ...
	I0717 10:55:12.642601    8315 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:55:12.642606    8315 status.go:343] host is not running, skipping remaining checks
	I0717 10:55:12.642609    8315 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (75.641958ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:55:20.707060    8323 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:20.707283    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:20.707288    8323 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:20.707291    8323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:20.707497    8323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:20.707689    8323 out.go:298] Setting JSON to false
	I0717 10:55:20.707702    8323 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:20.707739    8323 notify.go:220] Checking for updates...
	I0717 10:55:20.707963    8323 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:20.707971    8323 status.go:255] checking status of ha-008000 ...
	I0717 10:55:20.708260    8323 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:55:20.708265    8323 status.go:343] host is not running, skipping remaining checks
	I0717 10:55:20.708268    8323 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (73.933875ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:55:35.096070    8327 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:35.096295    8327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:35.096300    8327 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:35.096304    8327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:35.096474    8327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:35.096657    8327 out.go:298] Setting JSON to false
	I0717 10:55:35.096670    8327 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:35.096714    8327 notify.go:220] Checking for updates...
	I0717 10:55:35.096934    8327 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:35.096943    8327 status.go:255] checking status of ha-008000 ...
	I0717 10:55:35.097220    8327 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:55:35.097226    8327 status.go:343] host is not running, skipping remaining checks
	I0717 10:55:35.097229    8327 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (73.357166ms)

                                                
                                                
-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:55:47.631986    8336 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:47.632206    8336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:47.632211    8336 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:47.632214    8336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:47.632419    8336 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:47.632584    8336 out.go:298] Setting JSON to false
	I0717 10:55:47.632597    8336 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:47.632642    8336 notify.go:220] Checking for updates...
	I0717 10:55:47.632897    8336 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:47.632905    8336 status.go:255] checking status of ha-008000 ...
	I0717 10:55:47.633202    8336 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:55:47.633207    8336 status.go:343] host is not running, skipping remaining checks
	I0717 10:55:47.633210    8336 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (32.366375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (49.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-008000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-008000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (30.383875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-008000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-008000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-008000 -v=7 --alsologtostderr: (3.520491208s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-008000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-008000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.217236875s)

                                                
                                                
-- stdout --
	* [ha-008000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-008000" primary control-plane node in "ha-008000" cluster
	* Restarting existing qemu2 VM for "ha-008000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-008000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:55:51.355689    8365 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:51.355858    8365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:51.355866    8365 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:51.355869    8365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:51.356058    8365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:51.357307    8365 out.go:298] Setting JSON to false
	I0717 10:55:51.376558    8365 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5119,"bootTime":1721233832,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:55:51.376619    8365 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:55:51.380434    8365 out.go:177] * [ha-008000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:55:51.388280    8365 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:55:51.388347    8365 notify.go:220] Checking for updates...
	I0717 10:55:51.395197    8365 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:55:51.398258    8365 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:55:51.401288    8365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:55:51.404285    8365 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:55:51.407261    8365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:55:51.410545    8365 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:51.410605    8365 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:55:51.415209    8365 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:55:51.422256    8365 start.go:297] selected driver: qemu2
	I0717 10:55:51.422262    8365 start.go:901] validating driver "qemu2" against &{Name:ha-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:55:51.422308    8365 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:55:51.424789    8365 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:55:51.424816    8365 cni.go:84] Creating CNI manager for ""
	I0717 10:55:51.424821    8365 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 10:55:51.424873    8365 start.go:340] cluster config:
	{Name:ha-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-008000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:55:51.428685    8365 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:55:51.436193    8365 out.go:177] * Starting "ha-008000" primary control-plane node in "ha-008000" cluster
	I0717 10:55:51.440279    8365 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:55:51.440293    8365 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:55:51.440301    8365 cache.go:56] Caching tarball of preloaded images
	I0717 10:55:51.440353    8365 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:55:51.440359    8365 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:55:51.440417    8365 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/ha-008000/config.json ...
	I0717 10:55:51.440827    8365 start.go:360] acquireMachinesLock for ha-008000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:55:51.440866    8365 start.go:364] duration metric: took 32.459µs to acquireMachinesLock for "ha-008000"
	I0717 10:55:51.440876    8365 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:55:51.440882    8365 fix.go:54] fixHost starting: 
	I0717 10:55:51.441014    8365 fix.go:112] recreateIfNeeded on ha-008000: state=Stopped err=<nil>
	W0717 10:55:51.441023    8365 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:55:51.445263    8365 out.go:177] * Restarting existing qemu2 VM for "ha-008000" ...
	I0717 10:55:51.449250    8365 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:55:51.449291    8365 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:14:6d:a3:db:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:55:51.451320    8365 main.go:141] libmachine: STDOUT: 
	I0717 10:55:51.451342    8365 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:55:51.451371    8365 fix.go:56] duration metric: took 10.489709ms for fixHost
	I0717 10:55:51.451375    8365 start.go:83] releasing machines lock for "ha-008000", held for 10.504875ms
	W0717 10:55:51.451390    8365 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:55:51.451427    8365 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:55:51.451432    8365 start.go:729] Will try again in 5 seconds ...
	I0717 10:55:56.453492    8365 start.go:360] acquireMachinesLock for ha-008000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:55:56.453925    8365 start.go:364] duration metric: took 328.625µs to acquireMachinesLock for "ha-008000"
	I0717 10:55:56.454063    8365 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:55:56.454082    8365 fix.go:54] fixHost starting: 
	I0717 10:55:56.454795    8365 fix.go:112] recreateIfNeeded on ha-008000: state=Stopped err=<nil>
	W0717 10:55:56.454822    8365 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:55:56.463334    8365 out.go:177] * Restarting existing qemu2 VM for "ha-008000" ...
	I0717 10:55:56.467360    8365 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:55:56.467650    8365 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:14:6d:a3:db:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:55:56.476842    8365 main.go:141] libmachine: STDOUT: 
	I0717 10:55:56.476946    8365 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:55:56.477058    8365 fix.go:56] duration metric: took 22.974667ms for fixHost
	I0717 10:55:56.477080    8365 start.go:83] releasing machines lock for "ha-008000", held for 23.132958ms
	W0717 10:55:56.477289    8365 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-008000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-008000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:55:56.484394    8365 out.go:177] 
	W0717 10:55:56.488453    8365 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:55:56.488516    8365 out.go:239] * 
	* 
	W0717 10:55:56.491340    8365 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:55:56.499359    8365 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-008000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-008000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (33.352958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.87s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 node delete m03 -v=7 --alsologtostderr: exit status 83 (36.994584ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-008000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-008000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 10:55:56.641261    8383 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:56.641649    8383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:56.641653    8383 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:56.641655    8383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:56.641822    8383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:56.642052    8383 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:56.642236    8383 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:56.644140    8383 out.go:177] * The control-plane node ha-008000 host is not running: state=Stopped
	I0717 10:55:56.646898    8383 out.go:177]   To start a cluster, run: "minikube start -p ha-008000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-008000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (29.889916ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:55:56.678958    8385 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:55:56.679112    8385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:56.679115    8385 out.go:304] Setting ErrFile to fd 2...
	I0717 10:55:56.679117    8385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:55:56.679244    8385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:55:56.679362    8385 out.go:298] Setting JSON to false
	I0717 10:55:56.679372    8385 mustload.go:65] Loading cluster: ha-008000
	I0717 10:55:56.679436    8385 notify.go:220] Checking for updates...
	I0717 10:55:56.679584    8385 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:55:56.679590    8385 status.go:255] checking status of ha-008000 ...
	I0717 10:55:56.679782    8385 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:55:56.679785    8385 status.go:343] host is not running, skipping remaining checks
	I0717 10:55:56.679788    8385 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.059667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-008000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.463209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (3.47s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-008000 stop -v=7 --alsologtostderr: (3.36903125s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr: exit status 7 (67.098625ms)

-- stdout --
	ha-008000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:56:00.221011    8412 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:56:00.221210    8412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:56:00.221215    8412 out.go:304] Setting ErrFile to fd 2...
	I0717 10:56:00.221218    8412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:56:00.221382    8412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:56:00.221542    8412 out.go:298] Setting JSON to false
	I0717 10:56:00.221557    8412 mustload.go:65] Loading cluster: ha-008000
	I0717 10:56:00.221602    8412 notify.go:220] Checking for updates...
	I0717 10:56:00.221814    8412 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:56:00.221822    8412 status.go:255] checking status of ha-008000 ...
	I0717 10:56:00.222113    8412 status.go:330] ha-008000 host status = "Stopped" (err=<nil>)
	I0717 10:56:00.222118    8412 status.go:343] host is not running, skipping remaining checks
	I0717 10:56:00.222121    8412 status.go:257] ha-008000 status: &{Name:ha-008000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr": ha-008000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr": ha-008000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-008000 status -v=7 --alsologtostderr": ha-008000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (32.559292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.47s)

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-008000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-008000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.187989583s)

-- stdout --
	* [ha-008000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-008000" primary control-plane node in "ha-008000" cluster
	* Restarting existing qemu2 VM for "ha-008000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-008000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:56:00.284330    8416 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:56:00.284469    8416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:56:00.284472    8416 out.go:304] Setting ErrFile to fd 2...
	I0717 10:56:00.284475    8416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:56:00.284592    8416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:56:00.285580    8416 out.go:298] Setting JSON to false
	I0717 10:56:00.301543    8416 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5128,"bootTime":1721233832,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:56:00.301619    8416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:56:00.307078    8416 out.go:177] * [ha-008000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:56:00.313093    8416 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:56:00.313137    8416 notify.go:220] Checking for updates...
	I0717 10:56:00.320027    8416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:56:00.322975    8416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:56:00.325967    8416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:56:00.328972    8416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:56:00.332034    8416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:56:00.335299    8416 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:56:00.335556    8416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:56:00.339910    8416 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:56:00.347041    8416 start.go:297] selected driver: qemu2
	I0717 10:56:00.347050    8416 start.go:901] validating driver "qemu2" against &{Name:ha-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.2 ClusterName:ha-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:56:00.347129    8416 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:56:00.349241    8416 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:56:00.349303    8416 cni.go:84] Creating CNI manager for ""
	I0717 10:56:00.349308    8416 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 10:56:00.349350    8416 start.go:340] cluster config:
	{Name:ha-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-008000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:56:00.352666    8416 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:56:00.359882    8416 out.go:177] * Starting "ha-008000" primary control-plane node in "ha-008000" cluster
	I0717 10:56:00.363958    8416 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:56:00.363974    8416 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:56:00.363986    8416 cache.go:56] Caching tarball of preloaded images
	I0717 10:56:00.364041    8416 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:56:00.364046    8416 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:56:00.364115    8416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/ha-008000/config.json ...
	I0717 10:56:00.364616    8416 start.go:360] acquireMachinesLock for ha-008000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:56:00.364645    8416 start.go:364] duration metric: took 22.25µs to acquireMachinesLock for "ha-008000"
	I0717 10:56:00.364653    8416 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:56:00.364659    8416 fix.go:54] fixHost starting: 
	I0717 10:56:00.364790    8416 fix.go:112] recreateIfNeeded on ha-008000: state=Stopped err=<nil>
	W0717 10:56:00.364798    8416 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:56:00.368004    8416 out.go:177] * Restarting existing qemu2 VM for "ha-008000" ...
	I0717 10:56:00.376013    8416 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:56:00.376048    8416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:14:6d:a3:db:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:56:00.378065    8416 main.go:141] libmachine: STDOUT: 
	I0717 10:56:00.378085    8416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:56:00.378114    8416 fix.go:56] duration metric: took 13.45525ms for fixHost
	I0717 10:56:00.378118    8416 start.go:83] releasing machines lock for "ha-008000", held for 13.47ms
	W0717 10:56:00.378125    8416 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:56:00.378173    8416 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:56:00.378178    8416 start.go:729] Will try again in 5 seconds ...
	I0717 10:56:05.380309    8416 start.go:360] acquireMachinesLock for ha-008000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:56:05.380720    8416 start.go:364] duration metric: took 312.792µs to acquireMachinesLock for "ha-008000"
	I0717 10:56:05.380841    8416 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:56:05.380859    8416 fix.go:54] fixHost starting: 
	I0717 10:56:05.381572    8416 fix.go:112] recreateIfNeeded on ha-008000: state=Stopped err=<nil>
	W0717 10:56:05.381600    8416 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:56:05.386210    8416 out.go:177] * Restarting existing qemu2 VM for "ha-008000" ...
	I0717 10:56:05.396079    8416 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:56:05.396392    8416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:14:6d:a3:db:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/ha-008000/disk.qcow2
	I0717 10:56:05.405533    8416 main.go:141] libmachine: STDOUT: 
	I0717 10:56:05.405623    8416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:56:05.405721    8416 fix.go:56] duration metric: took 24.857708ms for fixHost
	I0717 10:56:05.405742    8416 start.go:83] releasing machines lock for "ha-008000", held for 24.99925ms
	W0717 10:56:05.405968    8416 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-008000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-008000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:56:05.415097    8416 out.go:177] 
	W0717 10:56:05.422276    8416 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:56:05.422320    8416 out.go:239] * 
	* 
	W0717 10:56:05.424940    8416 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:56:05.434760    8416 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-008000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (68.453458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-008000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (30.141208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-008000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-008000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.366208ms)

-- stdout --
	* The control-plane node ha-008000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-008000"

-- /stdout --
** stderr ** 
	I0717 10:56:05.620468    8433 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:56:05.620633    8433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:56:05.620636    8433 out.go:304] Setting ErrFile to fd 2...
	I0717 10:56:05.620639    8433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:56:05.620765    8433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:56:05.620996    8433 mustload.go:65] Loading cluster: ha-008000
	I0717 10:56:05.621178    8433 config.go:182] Loaded profile config "ha-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:56:05.624986    8433 out.go:177] * The control-plane node ha-008000 host is not running: state=Stopped
	I0717 10:56:05.628797    8433 out.go:177]   To start a cluster, run: "minikube start -p ha-008000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-008000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.567958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-008000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-008000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-008000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"ha-008000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-008000 -n ha-008000: exit status 7 (29.716583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-008000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (9.87s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-721000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-721000 --driver=qemu2 : exit status 80 (9.7993695s)

-- stdout --
	* [image-721000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-721000" primary control-plane node in "image-721000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-721000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-721000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-721000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-721000 -n image-721000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-721000 -n image-721000: exit status 7 (67.503875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-721000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.87s)
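Nearly every provisioning failure in this run traces back to the same host-side error: qemu2 cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet`. A quick way to distinguish "daemon not running behind an existing socket file" (ECONNREFUSED, as seen here) from "socket file missing" (ENOENT) is to probe the unix socket directly. The helper below is a hypothetical diagnostic sketch, not part of minikube or its test suite:

```python
import socket

def probe_unix_socket(path: str) -> str:
    """Try to connect to a unix-domain socket and classify the outcome."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(path)
        return "listening"          # a daemon accepted the connection
    except FileNotFoundError:
        return "missing"            # socket file does not exist
    except ConnectionRefusedError:
        # Matches the report's 'Connection refused': the socket file is
        # present but no socket_vmnet daemon is accepting on it.
        return "refused"
    finally:
        s.close()
```

On the affected Jenkins host this would presumably report `refused` for `/var/run/socket_vmnet`, matching the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` lines above; restarting the socket_vmnet launch daemon is the usual remedy for that state.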

TestJSONOutput/start/Command (9.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-968000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-968000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.737662375s)

-- stdout --
	{"specversion":"1.0","id":"89d6796d-efa3-4791-ac6a-91c01d3cf681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-968000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ff305f0-535e-4018-8eb0-b88bc969f636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19283"}}
	{"specversion":"1.0","id":"74f2a761-130f-4f0f-8683-0f5413c86d67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig"}}
	{"specversion":"1.0","id":"a20ad26a-73da-4b82-bcfb-97e123f2142a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b1645f89-0573-4e9b-a5ab-972c938e14b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"81bada19-9274-4fb6-91a4-b0b20ebdc5af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube"}}
	{"specversion":"1.0","id":"09341968-2d26-4021-8083-a34594679b67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"022401e1-eec8-4928-8b48-ef55f51370af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f234a33-3bfd-4087-86dc-0db3556e9a0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"474d8763-1969-46ad-b570-5188aa5c8b4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-968000\" primary control-plane node in \"json-output-968000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2ad6ca3-ea6c-47b8-b9f0-fcaa4b61a885","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"11162e07-d160-46aa-8d48-af916cad7041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-968000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d0d2f65-33ee-42ff-b2ec-bd03fa0bbad2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a478db9b-2631-49d5-a258-18290476d9c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"39797721-5187-409d-a225-ae6dae3c48f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-968000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"cc32dae0-b893-4bcc-a881-69bc617e67c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"5d6848b9-4670-4cc1-b064-a1a1efcfa758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-968000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.74s)
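The TestJSONOutput failures compound the underlying VM error: with `--output=json`, minikube is expected to emit one CloudEvent per stdout line, but the raw `OUTPUT:` / `ERROR:` lines from the qemu2 layer are interleaved into the stream, so the test's line-by-line JSON decode aborts at the first non-JSON byte (`'O'`). The sketch below illustrates that decode step; it is not minikube's actual test code (which is Go using `json.Unmarshal`), and the abbreviated event lines are stand-ins:

```python
import json

# Abbreviated stand-ins for the captured stdout lines in the failure above.
captured = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
    '"data":{"message":"Creating qemu2 VM ..."}}',
    'OUTPUT: ',
    'ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused',
]

def parse_events(lines):
    """Decode each stdout line as a CloudEvent; stop at the first bad line."""
    events, bad = [], None
    for line in lines:
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            bad = line  # mirrors "unable to marshal output: OUTPUT: "
            break
    return events, bad
```

Parsing `captured` decodes only the first event and stops at `'OUTPUT: '`, which is the same mechanism behind the report's `converting to cloud events: invalid character 'O' looking for beginning of value` (and the `'*'` variant for the plain-text unpause output below).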

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-968000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-968000 --output=json --user=testUser: exit status 83 (78.7145ms)

-- stdout --
	{"specversion":"1.0","id":"16fe885a-ba04-40f5-8584-888f721765f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-968000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"0afbda72-1335-44f5-bc1d-885d697b9f3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-968000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-968000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-968000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-968000 --output=json --user=testUser: exit status 83 (44.316584ms)

-- stdout --
	* The control-plane node json-output-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-968000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-968000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-968000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-054000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-054000 --driver=qemu2 : exit status 80 (9.800299625s)

-- stdout --
	* [first-054000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-054000" primary control-plane node in "first-054000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-054000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-054000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-054000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-17 10:56:37.663748 -0700 PDT m=+454.147392542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-055000 -n second-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-055000 -n second-055000: exit status 85 (80.660666ms)

-- stdout --
	* Profile "second-055000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-055000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-055000" host is not running, skipping log retrieval (state="* Profile \"second-055000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-055000\"")
helpers_test.go:175: Cleaning up "second-055000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-055000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-17 10:56:37.850709 -0700 PDT m=+454.334357626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-054000 -n first-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-054000 -n first-054000: exit status 7 (30.32525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-054000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-054000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-054000
--- FAIL: TestMinikubeProfile (10.09s)

TestMountStart/serial/StartWithMountFirst (9.86s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-615000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-615000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.794209084s)

-- stdout --
	* [mount-start-1-615000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-615000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-615000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-615000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-615000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-615000 -n mount-start-1-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-615000 -n mount-start-1-615000: exit status 7 (68.620667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-615000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.86s)

TestMultiNode/serial/FreshStart2Nodes (9.82s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-934000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-934000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.746450708s)

-- stdout --
	* [multinode-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-934000" primary control-plane node in "multinode-934000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-934000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:56:48.019384    8592 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:56:48.019646    8592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:56:48.019652    8592 out.go:304] Setting ErrFile to fd 2...
	I0717 10:56:48.019655    8592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:56:48.019845    8592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:56:48.021234    8592 out.go:298] Setting JSON to false
	I0717 10:56:48.037616    8592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5176,"bootTime":1721233832,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:56:48.037677    8592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:56:48.043239    8592 out.go:177] * [multinode-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:56:48.049092    8592 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:56:48.049129    8592 notify.go:220] Checking for updates...
	I0717 10:56:48.056038    8592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:56:48.059093    8592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:56:48.062088    8592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:56:48.063521    8592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:56:48.067034    8592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:56:48.070295    8592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:56:48.073916    8592 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 10:56:48.081059    8592 start.go:297] selected driver: qemu2
	I0717 10:56:48.081065    8592 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:56:48.081071    8592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:56:48.083435    8592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:56:48.088169    8592 out.go:177] * Automatically selected the socket_vmnet network
	I0717 10:56:48.091120    8592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:56:48.091149    8592 cni.go:84] Creating CNI manager for ""
	I0717 10:56:48.091154    8592 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 10:56:48.091157    8592 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 10:56:48.091189    8592 start.go:340] cluster config:
	{Name:multinode-934000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-934000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:56:48.095006    8592 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:56:48.102113    8592 out.go:177] * Starting "multinode-934000" primary control-plane node in "multinode-934000" cluster
	I0717 10:56:48.105924    8592 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:56:48.105940    8592 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:56:48.105955    8592 cache.go:56] Caching tarball of preloaded images
	I0717 10:56:48.106003    8592 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:56:48.106008    8592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:56:48.106222    8592 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/multinode-934000/config.json ...
	I0717 10:56:48.106235    8592 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/multinode-934000/config.json: {Name:mkd108e280269657bba5f1423fc03b0881234722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:56:48.106459    8592 start.go:360] acquireMachinesLock for multinode-934000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:56:48.106495    8592 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "multinode-934000"
	I0717 10:56:48.106506    8592 start.go:93] Provisioning new machine with config: &{Name:multinode-934000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-934000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:56:48.106554    8592 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:56:48.115084    8592 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 10:56:48.132790    8592 start.go:159] libmachine.API.Create for "multinode-934000" (driver="qemu2")
	I0717 10:56:48.132819    8592 client.go:168] LocalClient.Create starting
	I0717 10:56:48.132884    8592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 10:56:48.132915    8592 main.go:141] libmachine: Decoding PEM data...
	I0717 10:56:48.132923    8592 main.go:141] libmachine: Parsing certificate...
	I0717 10:56:48.132965    8592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 10:56:48.132994    8592 main.go:141] libmachine: Decoding PEM data...
	I0717 10:56:48.133002    8592 main.go:141] libmachine: Parsing certificate...
	I0717 10:56:48.133380    8592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:56:48.261174    8592 main.go:141] libmachine: Creating SSH key...
	I0717 10:56:48.349419    8592 main.go:141] libmachine: Creating Disk image...
	I0717 10:56:48.349425    8592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:56:48.349582    8592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:56:48.358557    8592 main.go:141] libmachine: STDOUT: 
	I0717 10:56:48.358586    8592 main.go:141] libmachine: STDERR: 
	I0717 10:56:48.358631    8592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2 +20000M
	I0717 10:56:48.366561    8592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:56:48.366575    8592 main.go:141] libmachine: STDERR: 
	I0717 10:56:48.366599    8592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:56:48.366604    8592 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:56:48.366617    8592 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:56:48.366644    8592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b9:58:8d:0d:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:56:48.368271    8592 main.go:141] libmachine: STDOUT: 
	I0717 10:56:48.368286    8592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:56:48.368314    8592 client.go:171] duration metric: took 235.497292ms to LocalClient.Create
	I0717 10:56:50.370434    8592 start.go:128] duration metric: took 2.263918167s to createHost
	I0717 10:56:50.370501    8592 start.go:83] releasing machines lock for "multinode-934000", held for 2.264048792s
	W0717 10:56:50.370611    8592 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:56:50.381563    8592 out.go:177] * Deleting "multinode-934000" in qemu2 ...
	W0717 10:56:50.406538    8592 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:56:50.406569    8592 start.go:729] Will try again in 5 seconds ...
	I0717 10:56:55.408587    8592 start.go:360] acquireMachinesLock for multinode-934000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:56:55.409052    8592 start.go:364] duration metric: took 371.209µs to acquireMachinesLock for "multinode-934000"
	I0717 10:56:55.409192    8592 start.go:93] Provisioning new machine with config: &{Name:multinode-934000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-934000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 10:56:55.409509    8592 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 10:56:55.421089    8592 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 10:56:55.471500    8592 start.go:159] libmachine.API.Create for "multinode-934000" (driver="qemu2")
	I0717 10:56:55.471548    8592 client.go:168] LocalClient.Create starting
	I0717 10:56:55.471659    8592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 10:56:55.471737    8592 main.go:141] libmachine: Decoding PEM data...
	I0717 10:56:55.471751    8592 main.go:141] libmachine: Parsing certificate...
	I0717 10:56:55.471821    8592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 10:56:55.471865    8592 main.go:141] libmachine: Decoding PEM data...
	I0717 10:56:55.471882    8592 main.go:141] libmachine: Parsing certificate...
	I0717 10:56:55.472387    8592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 10:56:55.611439    8592 main.go:141] libmachine: Creating SSH key...
	I0717 10:56:55.672531    8592 main.go:141] libmachine: Creating Disk image...
	I0717 10:56:55.672543    8592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 10:56:55.672687    8592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:56:55.682001    8592 main.go:141] libmachine: STDOUT: 
	I0717 10:56:55.682021    8592 main.go:141] libmachine: STDERR: 
	I0717 10:56:55.682070    8592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2 +20000M
	I0717 10:56:55.689943    8592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 10:56:55.689964    8592 main.go:141] libmachine: STDERR: 
	I0717 10:56:55.689974    8592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:56:55.689980    8592 main.go:141] libmachine: Starting QEMU VM...
	I0717 10:56:55.689987    8592 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:56:55.690027    8592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:25:d0:37:37:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:56:55.691682    8592 main.go:141] libmachine: STDOUT: 
	I0717 10:56:55.691700    8592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:56:55.691711    8592 client.go:171] duration metric: took 220.162291ms to LocalClient.Create
	I0717 10:56:57.693829    8592 start.go:128] duration metric: took 2.284347958s to createHost
	I0717 10:56:57.693874    8592 start.go:83] releasing machines lock for "multinode-934000", held for 2.284836958s
	W0717 10:56:57.694197    8592 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-934000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-934000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:56:57.705835    8592 out.go:177] 
	W0717 10:56:57.710003    8592 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:56:57.710048    8592 out.go:239] * 
	* 
	W0717 10:56:57.712670    8592 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:56:57.722809    8592 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-934000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (70.715709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.82s)

TestMultiNode/serial/DeployApp2Nodes (112.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.56275ms)

** stderr ** 
	error: cluster "multinode-934000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- rollout status deployment/busybox: exit status 1 (56.571375ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.005083ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.555416ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.87175ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.05775ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.195458ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.305334ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.908417ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.446875ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.392042ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.664291ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.808625ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.583916ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.9465ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.907375ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.846709ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (29.966666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (112.07s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-934000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.107625ms)

** stderr **
	error: no server found for cluster "multinode-934000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (30.119916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-934000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-934000 -v 3 --alsologtostderr: exit status 83 (40.959292ms)

-- stdout --
	* The control-plane node multinode-934000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-934000"

-- /stdout --
** stderr ** 
	I0717 10:58:49.991721    8723 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:49.991873    8723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:49.991876    8723 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:49.991879    8723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:49.992026    8723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:49.992278    8723 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:49.992458    8723 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:49.996320    8723 out.go:177] * The control-plane node multinode-934000 host is not running: state=Stopped
	I0717 10:58:50.000269    8723 out.go:177]   To start a cluster, run: "minikube start -p multinode-934000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-934000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (29.741459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-934000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-934000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.2915ms)

** stderr **
	Error in configuration: context was not found for specified context: multinode-934000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-934000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-934000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (29.399958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-934000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-934000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-934000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.2\",\"ClusterName\":\"multinode-934000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (29.926292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status --output json --alsologtostderr: exit status 7 (29.996542ms)

-- stdout --
	{"Name":"multinode-934000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0717 10:58:50.195376    8735 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:50.195523    8735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:50.195526    8735 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:50.195529    8735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:50.195673    8735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:50.195792    8735 out.go:298] Setting JSON to true
	I0717 10:58:50.195801    8735 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:50.195864    8735 notify.go:220] Checking for updates...
	I0717 10:58:50.196002    8735 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:50.196008    8735 status.go:255] checking status of multinode-934000 ...
	I0717 10:58:50.196204    8735 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:58:50.196208    8735 status.go:343] host is not running, skipping remaining checks
	I0717 10:58:50.196210    8735 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-934000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (29.463666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 node stop m03: exit status 85 (46.550917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-934000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status: exit status 7 (29.32825ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr: exit status 7 (29.936042ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:58:50.330937    8743 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:50.331315    8743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:50.331320    8743 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:50.331322    8743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:50.331514    8743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:50.331660    8743 out.go:298] Setting JSON to false
	I0717 10:58:50.331670    8743 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:50.331849    8743 notify.go:220] Checking for updates...
	I0717 10:58:50.332052    8743 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:50.332060    8743 status.go:255] checking status of multinode-934000 ...
	I0717 10:58:50.332274    8743 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:58:50.332278    8743 status.go:343] host is not running, skipping remaining checks
	I0717 10:58:50.332280    8743 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr": multinode-934000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (30.209625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (56.32s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.821792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0717 10:58:50.391641    8747 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:50.392013    8747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:50.392017    8747 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:50.392019    8747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:50.392158    8747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:50.392384    8747 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:50.392556    8747 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:50.395862    8747 out.go:177] 
	W0717 10:58:50.398848    8747 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0717 10:58:50.398853    8747 out.go:239] * 
	* 
	W0717 10:58:50.400737    8747 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:58:50.404851    8747 out.go:177] 

** /stderr **
multinode_test.go:284: I0717 10:58:50.391641    8747 out.go:291] Setting OutFile to fd 1 ...
I0717 10:58:50.392013    8747 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:58:50.392017    8747 out.go:304] Setting ErrFile to fd 2...
I0717 10:58:50.392019    8747 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:58:50.392158    8747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
I0717 10:58:50.392384    8747 mustload.go:65] Loading cluster: multinode-934000
I0717 10:58:50.392556    8747 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:58:50.395862    8747 out.go:177] 
W0717 10:58:50.398848    8747 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0717 10:58:50.398853    8747 out.go:239] * 
* 
W0717 10:58:50.400737    8747 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:58:50.404851    8747 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-934000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (30.220875ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:58:50.438281    8749 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:50.438409    8749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:50.438412    8749 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:50.438414    8749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:50.438545    8749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:50.438673    8749 out.go:298] Setting JSON to false
	I0717 10:58:50.438685    8749 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:50.438745    8749 notify.go:220] Checking for updates...
	I0717 10:58:50.438887    8749 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:50.438896    8749 status.go:255] checking status of multinode-934000 ...
	I0717 10:58:50.439107    8749 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:58:50.439111    8749 status.go:343] host is not running, skipping remaining checks
	I0717 10:58:50.439114    8749 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (71.930542ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:58:51.556871    8753 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:51.557082    8753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:51.557086    8753 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:51.557089    8753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:51.557269    8753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:51.557439    8753 out.go:298] Setting JSON to false
	I0717 10:58:51.557451    8753 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:51.557492    8753 notify.go:220] Checking for updates...
	I0717 10:58:51.557708    8753 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:51.557718    8753 status.go:255] checking status of multinode-934000 ...
	I0717 10:58:51.557995    8753 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:58:51.558000    8753 status.go:343] host is not running, skipping remaining checks
	I0717 10:58:51.558003    8753 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (73.607125ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:58:52.449867    8755 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:52.450061    8755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:52.450065    8755 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:52.450068    8755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:52.450221    8755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:52.450377    8755 out.go:298] Setting JSON to false
	I0717 10:58:52.450389    8755 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:52.450426    8755 notify.go:220] Checking for updates...
	I0717 10:58:52.450647    8755 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:52.450654    8755 status.go:255] checking status of multinode-934000 ...
	I0717 10:58:52.450936    8755 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:58:52.450941    8755 status.go:343] host is not running, skipping remaining checks
	I0717 10:58:52.450943    8755 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (73.068666ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:58:54.462526    8757 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:54.462719    8757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:54.462723    8757 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:54.462726    8757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:54.462917    8757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:54.463075    8757 out.go:298] Setting JSON to false
	I0717 10:58:54.463088    8757 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:54.463131    8757 notify.go:220] Checking for updates...
	I0717 10:58:54.463342    8757 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:54.463350    8757 status.go:255] checking status of multinode-934000 ...
	I0717 10:58:54.463606    8757 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:58:54.463611    8757 status.go:343] host is not running, skipping remaining checks
	I0717 10:58:54.463614    8757 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (73.39475ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:58:58.542519    8759 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:58:58.542736    8759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:58.542741    8759 out.go:304] Setting ErrFile to fd 2...
	I0717 10:58:58.542745    8759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:58:58.542925    8759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:58:58.543077    8759 out.go:298] Setting JSON to false
	I0717 10:58:58.543089    8759 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:58:58.543136    8759 notify.go:220] Checking for updates...
	I0717 10:58:58.543351    8759 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:58:58.543358    8759 status.go:255] checking status of multinode-934000 ...
	I0717 10:58:58.543641    8759 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:58:58.543645    8759 status.go:343] host is not running, skipping remaining checks
	I0717 10:58:58.543648    8759 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (76.170542ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:59:03.620229    8764 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:03.620422    8764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:03.620431    8764 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:03.620435    8764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:03.620621    8764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:59:03.620820    8764 out.go:298] Setting JSON to false
	I0717 10:59:03.620834    8764 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:59:03.620885    8764 notify.go:220] Checking for updates...
	I0717 10:59:03.621124    8764 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:03.621132    8764 status.go:255] checking status of multinode-934000 ...
	I0717 10:59:03.621408    8764 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:59:03.621413    8764 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:03.621416    8764 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (73.281625ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:59:09.501812    8768 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:09.502008    8768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:09.502013    8768 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:09.502016    8768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:09.502195    8768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:59:09.502363    8768 out.go:298] Setting JSON to false
	I0717 10:59:09.502378    8768 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:59:09.502420    8768 notify.go:220] Checking for updates...
	I0717 10:59:09.502633    8768 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:09.502649    8768 status.go:255] checking status of multinode-934000 ...
	I0717 10:59:09.502944    8768 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:59:09.502949    8768 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:09.502952    8768 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (73.550417ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:59:22.039198    8776 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:22.039395    8776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:22.039399    8776 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:22.039402    8776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:22.039576    8776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:59:22.039731    8776 out.go:298] Setting JSON to false
	I0717 10:59:22.039743    8776 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:59:22.039785    8776 notify.go:220] Checking for updates...
	I0717 10:59:22.039987    8776 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:22.039995    8776 status.go:255] checking status of multinode-934000 ...
	I0717 10:59:22.040292    8776 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:59:22.040296    8776 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:22.040299    8776 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr: exit status 7 (73.423375ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:59:46.644310    8796 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:46.644513    8796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:46.644517    8796 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:46.644520    8796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:46.644691    8796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:59:46.644852    8796 out.go:298] Setting JSON to false
	I0717 10:59:46.644865    8796 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:59:46.644904    8796 notify.go:220] Checking for updates...
	I0717 10:59:46.645138    8796 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:46.645146    8796 status.go:255] checking status of multinode-934000 ...
	I0717 10:59:46.645426    8796 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:59:46.645431    8796 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:46.645433    8796 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-934000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (32.417333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (56.32s)

TestMultiNode/serial/RestartKeepsNodes (8.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-934000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-934000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-934000: (3.431803833s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-934000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-934000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.217230167s)

-- stdout --
	* [multinode-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-934000" primary control-plane node in "multinode-934000" cluster
	* Restarting existing qemu2 VM for "multinode-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:59:50.202323    8820 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:50.202479    8820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:50.202483    8820 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:50.202486    8820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:50.202624    8820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:59:50.203793    8820 out.go:298] Setting JSON to false
	I0717 10:59:50.222427    8820 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5358,"bootTime":1721233832,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:59:50.222505    8820 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:59:50.227817    8820 out.go:177] * [multinode-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:59:50.235720    8820 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:59:50.235772    8820 notify.go:220] Checking for updates...
	I0717 10:59:50.241039    8820 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:59:50.243756    8820 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:59:50.246728    8820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:59:50.249767    8820 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:59:50.252675    8820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:59:50.256007    8820 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:50.256072    8820 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:59:50.260704    8820 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:59:50.267736    8820 start.go:297] selected driver: qemu2
	I0717 10:59:50.267745    8820 start.go:901] validating driver "qemu2" against &{Name:multinode-934000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-934000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:59:50.267803    8820 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:59:50.270208    8820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:59:50.270252    8820 cni.go:84] Creating CNI manager for ""
	I0717 10:59:50.270258    8820 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 10:59:50.270311    8820 start.go:340] cluster config:
	{Name:multinode-934000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-934000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:59:50.273894    8820 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:59:50.281717    8820 out.go:177] * Starting "multinode-934000" primary control-plane node in "multinode-934000" cluster
	I0717 10:59:50.285660    8820 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:59:50.285674    8820 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:59:50.285680    8820 cache.go:56] Caching tarball of preloaded images
	I0717 10:59:50.285741    8820 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:59:50.285747    8820 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:59:50.285801    8820 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/multinode-934000/config.json ...
	I0717 10:59:50.286241    8820 start.go:360] acquireMachinesLock for multinode-934000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:59:50.286276    8820 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "multinode-934000"
	I0717 10:59:50.286285    8820 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:59:50.286292    8820 fix.go:54] fixHost starting: 
	I0717 10:59:50.286408    8820 fix.go:112] recreateIfNeeded on multinode-934000: state=Stopped err=<nil>
	W0717 10:59:50.286419    8820 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:59:50.294731    8820 out.go:177] * Restarting existing qemu2 VM for "multinode-934000" ...
	I0717 10:59:50.297704    8820 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:59:50.297750    8820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:25:d0:37:37:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:59:50.299874    8820 main.go:141] libmachine: STDOUT: 
	I0717 10:59:50.299894    8820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:59:50.299920    8820 fix.go:56] duration metric: took 13.628959ms for fixHost
	I0717 10:59:50.299926    8820 start.go:83] releasing machines lock for "multinode-934000", held for 13.645166ms
	W0717 10:59:50.299931    8820 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:59:50.299967    8820 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:59:50.299972    8820 start.go:729] Will try again in 5 seconds ...
	I0717 10:59:55.302093    8820 start.go:360] acquireMachinesLock for multinode-934000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:59:55.302501    8820 start.go:364] duration metric: took 312.542µs to acquireMachinesLock for "multinode-934000"
	I0717 10:59:55.302653    8820 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:59:55.302674    8820 fix.go:54] fixHost starting: 
	I0717 10:59:55.303433    8820 fix.go:112] recreateIfNeeded on multinode-934000: state=Stopped err=<nil>
	W0717 10:59:55.303460    8820 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:59:55.309337    8820 out.go:177] * Restarting existing qemu2 VM for "multinode-934000" ...
	I0717 10:59:55.313955    8820 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:59:55.314168    8820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:25:d0:37:37:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:59:55.323574    8820 main.go:141] libmachine: STDOUT: 
	I0717 10:59:55.323637    8820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:59:55.323719    8820 fix.go:56] duration metric: took 21.047417ms for fixHost
	I0717 10:59:55.323736    8820 start.go:83] releasing machines lock for "multinode-934000", held for 21.213333ms
	W0717 10:59:55.323889    8820 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:59:55.330902    8820 out.go:177] 
	W0717 10:59:55.335080    8820 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:59:55.335125    8820 out.go:239] * 
	* 
	W0717 10:59:55.337843    8820 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:59:55.344975    8820 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-934000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-934000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (32.585292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.78s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 node delete m03: exit status 83 (41.789375ms)

-- stdout --
	* The control-plane node multinode-934000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-934000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-934000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr: exit status 7 (28.901833ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:59:55.531665    8839 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:55.531807    8839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:55.531811    8839 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:55.531813    8839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:55.532036    8839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:59:55.532160    8839 out.go:298] Setting JSON to false
	I0717 10:59:55.532169    8839 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:59:55.532215    8839 notify.go:220] Checking for updates...
	I0717 10:59:55.532363    8839 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:55.532369    8839 status.go:255] checking status of multinode-934000 ...
	I0717 10:59:55.532563    8839 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:59:55.532567    8839 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:55.532570    8839 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (29.03075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-934000 stop: (2.956954042s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status: exit status 7 (65.954333ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr: exit status 7 (32.202125ms)

-- stdout --
	multinode-934000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0717 10:59:58.616509    8865 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:58.616646    8865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:58.616649    8865 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:58.616652    8865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:58.616772    8865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:59:58.616903    8865 out.go:298] Setting JSON to false
	I0717 10:59:58.616913    8865 mustload.go:65] Loading cluster: multinode-934000
	I0717 10:59:58.616963    8865 notify.go:220] Checking for updates...
	I0717 10:59:58.617100    8865 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:58.617106    8865 status.go:255] checking status of multinode-934000 ...
	I0717 10:59:58.617309    8865 status.go:330] multinode-934000 host status = "Stopped" (err=<nil>)
	I0717 10:59:58.617314    8865 status.go:343] host is not running, skipping remaining checks
	I0717 10:59:58.617316    8865 status.go:257] multinode-934000 status: &{Name:multinode-934000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr": multinode-934000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-934000 status --alsologtostderr": multinode-934000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (29.516584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.09s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-934000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-934000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181296583s)

-- stdout --
	* [multinode-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-934000" primary control-plane node in "multinode-934000" cluster
	* Restarting existing qemu2 VM for "multinode-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-934000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 10:59:58.675654    8869 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:59:58.675767    8869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:58.675770    8869 out.go:304] Setting ErrFile to fd 2...
	I0717 10:59:58.675772    8869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:59:58.675914    8869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:59:58.676940    8869 out.go:298] Setting JSON to false
	I0717 10:59:58.692824    8869 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5366,"bootTime":1721233832,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:59:58.692893    8869 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:59:58.696905    8869 out.go:177] * [multinode-934000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:59:58.702671    8869 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:59:58.702729    8869 notify.go:220] Checking for updates...
	I0717 10:59:58.709741    8869 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:59:58.712743    8869 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:59:58.715769    8869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:59:58.718723    8869 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:59:58.721737    8869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:59:58.725041    8869 config.go:182] Loaded profile config "multinode-934000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:59:58.725303    8869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:59:58.729746    8869 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:59:58.736689    8869 start.go:297] selected driver: qemu2
	I0717 10:59:58.736696    8869 start.go:901] validating driver "qemu2" against &{Name:multinode-934000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-934000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:59:58.736757    8869 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:59:58.738943    8869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 10:59:58.738998    8869 cni.go:84] Creating CNI manager for ""
	I0717 10:59:58.739003    8869 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 10:59:58.739048    8869 start.go:340] cluster config:
	{Name:multinode-934000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-934000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:59:58.742405    8869 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:59:58.749736    8869 out.go:177] * Starting "multinode-934000" primary control-plane node in "multinode-934000" cluster
	I0717 10:59:58.753743    8869 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:59:58.753760    8869 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:59:58.753769    8869 cache.go:56] Caching tarball of preloaded images
	I0717 10:59:58.753825    8869 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 10:59:58.753831    8869 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 10:59:58.753886    8869 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/multinode-934000/config.json ...
	I0717 10:59:58.754301    8869 start.go:360] acquireMachinesLock for multinode-934000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 10:59:58.754328    8869 start.go:364] duration metric: took 21.333µs to acquireMachinesLock for "multinode-934000"
	I0717 10:59:58.754336    8869 start.go:96] Skipping create...Using existing machine configuration
	I0717 10:59:58.754343    8869 fix.go:54] fixHost starting: 
	I0717 10:59:58.754453    8869 fix.go:112] recreateIfNeeded on multinode-934000: state=Stopped err=<nil>
	W0717 10:59:58.754461    8869 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 10:59:58.761761    8869 out.go:177] * Restarting existing qemu2 VM for "multinode-934000" ...
	I0717 10:59:58.769759    8869 qemu.go:418] Using hvf for hardware acceleration
	I0717 10:59:58.769822    8869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:25:d0:37:37:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 10:59:58.771815    8869 main.go:141] libmachine: STDOUT: 
	I0717 10:59:58.771834    8869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 10:59:58.771861    8869 fix.go:56] duration metric: took 17.518167ms for fixHost
	I0717 10:59:58.771866    8869 start.go:83] releasing machines lock for "multinode-934000", held for 17.53425ms
	W0717 10:59:58.771871    8869 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 10:59:58.771910    8869 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 10:59:58.771915    8869 start.go:729] Will try again in 5 seconds ...
	I0717 11:00:03.773932    8869 start.go:360] acquireMachinesLock for multinode-934000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:00:03.774374    8869 start.go:364] duration metric: took 332.333µs to acquireMachinesLock for "multinode-934000"
	I0717 11:00:03.774553    8869 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:00:03.774575    8869 fix.go:54] fixHost starting: 
	I0717 11:00:03.775359    8869 fix.go:112] recreateIfNeeded on multinode-934000: state=Stopped err=<nil>
	W0717 11:00:03.775385    8869 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:00:03.779894    8869 out.go:177] * Restarting existing qemu2 VM for "multinode-934000" ...
	I0717 11:00:03.783844    8869 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:00:03.784073    8869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:25:d0:37:37:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/multinode-934000/disk.qcow2
	I0717 11:00:03.793630    8869 main.go:141] libmachine: STDOUT: 
	I0717 11:00:03.793691    8869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:00:03.793782    8869 fix.go:56] duration metric: took 19.210458ms for fixHost
	I0717 11:00:03.793803    8869 start.go:83] releasing machines lock for "multinode-934000", held for 19.377958ms
	W0717 11:00:03.794008    8869 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-934000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:03.801841    8869 out.go:177] 
	W0717 11:00:03.805869    8869 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:00:03.805888    8869 out.go:239] * 
	* 
	W0717 11:00:03.808186    8869 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:00:03.815874    8869 out.go:177] 

** /stderr **
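The failure captured above repeats throughout this run: the qemu2 driver cannot reach the socket_vmnet helper at /var/run/socket_vmnet. A minimal sketch of a pre-flight check for that condition (the socket path is taken from the log; the launchd hint is an assumption about a typical socket_vmnet install, adjust for yours):

```shell
# Check whether the socket_vmnet helper socket exists before starting minikube.
# SOCK comes from the SocketVMnetPath value in the cluster config logged above.
SOCK="/var/run/socket_vmnet"
if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present: $SOCK"
else
  echo "socket_vmnet socket missing: $SOCK (is the socket_vmnet service running?)"
fi
```

A missing or dead socket here would reproduce the "Connection refused" error seen in every StartHost attempt in this report.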
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-934000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (66.579375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-934000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-934000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-934000-m01 --driver=qemu2 : exit status 80 (9.832886333s)

-- stdout --
	* [multinode-934000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-934000-m01" primary control-plane node in "multinode-934000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-934000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-934000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-934000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-934000-m02 --driver=qemu2 : exit status 80 (9.909643584s)

-- stdout --
	* [multinode-934000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-934000-m02" primary control-plane node in "multinode-934000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-934000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-934000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-934000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-934000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-934000: exit status 83 (82.128334ms)

-- stdout --
	* The control-plane node multinode-934000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-934000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-934000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-934000 -n multinode-934000: exit status 7 (30.462583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-934000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.97s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.792138959s)

-- stdout --
	* [test-preload-118000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-118000" primary control-plane node in "test-preload-118000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-118000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0717 11:00:23.999281    8949 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:00:23.999413    8949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:23.999416    8949 out.go:304] Setting ErrFile to fd 2...
	I0717 11:00:23.999419    8949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:00:23.999540    8949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:00:24.000610    8949 out.go:298] Setting JSON to false
	I0717 11:00:24.016493    8949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5392,"bootTime":1721233832,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:00:24.016564    8949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:00:24.022927    8949 out.go:177] * [test-preload-118000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:00:24.030072    8949 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:00:24.030115    8949 notify.go:220] Checking for updates...
	I0717 11:00:24.037064    8949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:00:24.040108    8949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:00:24.041450    8949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:00:24.044066    8949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:00:24.047100    8949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:00:24.050515    8949 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:00:24.050566    8949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:00:24.054986    8949 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:00:24.062092    8949 start.go:297] selected driver: qemu2
	I0717 11:00:24.062100    8949 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:00:24.062108    8949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:00:24.064408    8949 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:00:24.068059    8949 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:00:24.072132    8949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:00:24.072161    8949 cni.go:84] Creating CNI manager for ""
	I0717 11:00:24.072169    8949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:00:24.072176    8949 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:00:24.072209    8949 start.go:340] cluster config:
	{Name:test-preload-118000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:00:24.075883    8949 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.085097    8949 out.go:177] * Starting "test-preload-118000" primary control-plane node in "test-preload-118000" cluster
	I0717 11:00:24.088055    8949 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0717 11:00:24.088124    8949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/test-preload-118000/config.json ...
	I0717 11:00:24.088143    8949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/test-preload-118000/config.json: {Name:mk21e108d9eeb0b42d8ce40bddac6e70f0d49a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:00:24.088152    8949 cache.go:107] acquiring lock: {Name:mk4d464219fe0e8f33ec564b77389af3c92b2b3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.088153    8949 cache.go:107] acquiring lock: {Name:mkc7708d6905596ff88dc5ade1d31d0e07a883ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.088176    8949 cache.go:107] acquiring lock: {Name:mk84dbdb1b7a0b614636c146c01f1f2e8881ec33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.088140    8949 cache.go:107] acquiring lock: {Name:mk37ecf4b84c0a96dc795321c2d0379c9a5f9bf4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.088338    8949 cache.go:107] acquiring lock: {Name:mkea9a1f67c69f12f6555ea25c0f4ca4c0a78a60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.088344    8949 cache.go:107] acquiring lock: {Name:mkee7f50ab0f80abd35154e5b3e2239108d5657c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.088345    8949 cache.go:107] acquiring lock: {Name:mkca83d45fc33c1a1544bc65c142296437c5af4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.088375    8949 cache.go:107] acquiring lock: {Name:mkac9bb46f1d6043e150e8f10d7544e5c6bafcbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:00:24.088530    8949 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:00:24.088547    8949 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 11:00:24.088593    8949 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 11:00:24.088600    8949 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 11:00:24.088616    8949 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 11:00:24.088652    8949 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 11:00:24.088745    8949 start.go:360] acquireMachinesLock for test-preload-118000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:00:24.088791    8949 start.go:364] duration metric: took 40.125µs to acquireMachinesLock for "test-preload-118000"
	I0717 11:00:24.088818    8949 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:00:24.088804    8949 start.go:93] Provisioning new machine with config: &{Name:test-preload-118000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:00:24.088839    8949 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:00:24.088849    8949 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:00:24.096044    8949 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:00:24.098880    8949 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 11:00:24.101493    8949 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 11:00:24.102292    8949 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 11:00:24.102388    8949 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:00:24.102449    8949 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:00:24.102474    8949 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 11:00:24.102487    8949 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:00:24.102541    8949 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 11:00:24.114524    8949 start.go:159] libmachine.API.Create for "test-preload-118000" (driver="qemu2")
	I0717 11:00:24.114548    8949 client.go:168] LocalClient.Create starting
	I0717 11:00:24.114642    8949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:00:24.114675    8949 main.go:141] libmachine: Decoding PEM data...
	I0717 11:00:24.114703    8949 main.go:141] libmachine: Parsing certificate...
	I0717 11:00:24.114751    8949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:00:24.114777    8949 main.go:141] libmachine: Decoding PEM data...
	I0717 11:00:24.114785    8949 main.go:141] libmachine: Parsing certificate...
	I0717 11:00:24.115223    8949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:00:24.248548    8949 main.go:141] libmachine: Creating SSH key...
	I0717 11:00:24.312781    8949 main.go:141] libmachine: Creating Disk image...
	I0717 11:00:24.312795    8949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:00:24.312934    8949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2
	I0717 11:00:24.323006    8949 main.go:141] libmachine: STDOUT: 
	I0717 11:00:24.323030    8949 main.go:141] libmachine: STDERR: 
	I0717 11:00:24.323081    8949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2 +20000M
	I0717 11:00:24.331736    8949 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:00:24.331750    8949 main.go:141] libmachine: STDERR: 
	I0717 11:00:24.331764    8949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2
	I0717 11:00:24.331767    8949 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:00:24.331779    8949 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:00:24.331811    8949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:1f:3f:7c:08:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2
	I0717 11:00:24.333743    8949 main.go:141] libmachine: STDOUT: 
	I0717 11:00:24.333763    8949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:00:24.333781    8949 client.go:171] duration metric: took 219.235ms to LocalClient.Create
	I0717 11:00:24.484194    8949 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0717 11:00:24.488125    8949 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0717 11:00:24.518516    8949 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0717 11:00:24.568682    8949 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0717 11:00:24.606498    8949 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0717 11:00:24.606527    8949 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 11:00:24.609024    8949 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0717 11:00:24.637074    8949 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0717 11:00:24.734030    8949 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0717 11:00:24.734075    8949 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 645.939ms
	I0717 11:00:24.734121    8949 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0717 11:00:24.852215    8949 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 11:00:24.852299    8949 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 11:00:25.050386    8949 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 11:00:25.050440    8949 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 962.3205ms
	I0717 11:00:25.050478    8949 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 11:00:26.333966    8949 start.go:128] duration metric: took 2.245157834s to createHost
	I0717 11:00:26.334017    8949 start.go:83] releasing machines lock for "test-preload-118000", held for 2.245269958s
	W0717 11:00:26.334073    8949 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:26.346245    8949 out.go:177] * Deleting "test-preload-118000" in qemu2 ...
	W0717 11:00:26.372046    8949 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:26.372073    8949 start.go:729] Will try again in 5 seconds ...
	I0717 11:00:27.126510    8949 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0717 11:00:27.126576    8949 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.038323s
	I0717 11:00:27.126608    8949 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0717 11:00:27.239389    8949 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0717 11:00:27.239460    8949 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.151213708s
	I0717 11:00:27.239493    8949 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0717 11:00:28.576128    8949 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0717 11:00:28.576181    8949 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.487968625s
	I0717 11:00:28.576208    8949 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0717 11:00:29.173403    8949 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0717 11:00:29.173457    8949 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.085433042s
	I0717 11:00:29.173486    8949 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0717 11:00:30.257453    8949 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0717 11:00:30.257508    8949 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.169274333s
	I0717 11:00:30.257569    8949 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0717 11:00:31.372330    8949 start.go:360] acquireMachinesLock for test-preload-118000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:00:31.372790    8949 start.go:364] duration metric: took 373.666µs to acquireMachinesLock for "test-preload-118000"
	I0717 11:00:31.372910    8949 start.go:93] Provisioning new machine with config: &{Name:test-preload-118000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:00:31.373185    8949 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:00:31.384875    8949 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:00:31.436441    8949 start.go:159] libmachine.API.Create for "test-preload-118000" (driver="qemu2")
	I0717 11:00:31.436498    8949 client.go:168] LocalClient.Create starting
	I0717 11:00:31.436677    8949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:00:31.436739    8949 main.go:141] libmachine: Decoding PEM data...
	I0717 11:00:31.436759    8949 main.go:141] libmachine: Parsing certificate...
	I0717 11:00:31.436825    8949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:00:31.436871    8949 main.go:141] libmachine: Decoding PEM data...
	I0717 11:00:31.436888    8949 main.go:141] libmachine: Parsing certificate...
	I0717 11:00:31.437430    8949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:00:31.578543    8949 main.go:141] libmachine: Creating SSH key...
	I0717 11:00:31.696365    8949 main.go:141] libmachine: Creating Disk image...
	I0717 11:00:31.696371    8949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:00:31.696540    8949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2
	I0717 11:00:31.706212    8949 main.go:141] libmachine: STDOUT: 
	I0717 11:00:31.706234    8949 main.go:141] libmachine: STDERR: 
	I0717 11:00:31.706298    8949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2 +20000M
	I0717 11:00:31.714568    8949 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:00:31.714582    8949 main.go:141] libmachine: STDERR: 
	I0717 11:00:31.714603    8949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2
	I0717 11:00:31.714610    8949 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:00:31.714623    8949 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:00:31.714660    8949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:8d:a3:1a:90:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/test-preload-118000/disk.qcow2
	I0717 11:00:31.716467    8949 main.go:141] libmachine: STDOUT: 
	I0717 11:00:31.716490    8949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:00:31.716503    8949 client.go:171] duration metric: took 280.005875ms to LocalClient.Create
	I0717 11:00:33.717109    8949 start.go:128] duration metric: took 2.34393625s to createHost
	I0717 11:00:33.717155    8949 start.go:83] releasing machines lock for "test-preload-118000", held for 2.344397416s
	W0717 11:00:33.717433    8949 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:00:33.730946    8949 out.go:177] 
	W0717 11:00:33.733875    8949 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:00:33.733934    8949 out.go:239] * 
	* 
	W0717 11:00:33.736354    8949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:00:33.747842    8949 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-17 11:00:33.765554 -0700 PDT m=+690.254970751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-118000 -n test-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-118000 -n test-preload-118000: exit status 7 (67.844792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-118000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-118000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-118000
--- FAIL: TestPreload (9.94s)
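Editor's note: every failure in this report reduces to the same root cause, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver could not reach the socket_vmnet daemon on the CI host. A minimal precondition check is sketched below; the socket path is taken from the logs above, and the script is a diagnostic aid only, not part of the test suite.

```shell
#!/bin/sh
# Diagnostic sketch: verify the unix socket the minikube qemu2 driver dials
# via socket_vmnet_client. Path matches SocketVMnetPath in the logs above.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  # The socket exists; a "Connection refused" at this point would mean the
  # socket_vmnet daemon died after creating it.
  echo "ok: $SOCK is a unix socket"
else
  # Missing socket: socket_vmnet is not running; start its service on the
  # host before invoking the qemu2 driver tests.
  echo "fail: $SOCK is absent or not a socket"
fi
```

Running this on the CI node before the test job would distinguish "daemon never started" from "daemon crashed mid-run".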

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-474000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-474000 --memory=2048 --driver=qemu2 : exit status 80 (9.901458667s)

-- stdout --
	* [scheduled-stop-474000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-474000" primary control-plane node in "scheduled-stop-474000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-474000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-474000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-474000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-474000" primary control-plane node in "scheduled-stop-474000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-474000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-474000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-17 11:00:43.810528 -0700 PDT m=+700.300190376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-474000 -n scheduled-stop-474000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-474000 -n scheduled-stop-474000: exit status 7 (68.677791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-474000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-474000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-474000
--- FAIL: TestScheduledStopUnix (10.04s)

TestSkaffold (12.11s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2807475317 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2807475317 version: (1.0611765s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-227000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-227000 --memory=2600 --driver=qemu2 : exit status 80 (9.71433725s)

-- stdout --
	* [skaffold-227000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-227000" primary control-plane node in "skaffold-227000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-227000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-227000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-227000" primary control-plane node in "skaffold-227000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-227000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-17 11:00:55.921744 -0700 PDT m=+712.411702542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-227000 -n skaffold-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-227000 -n skaffold-227000: exit status 7 (63.7135ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-227000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-227000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-227000
--- FAIL: TestSkaffold (12.11s)
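The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` errors above mean the socket file is being dialed but nothing is accepting connections on it (i.e. the socket_vmnet daemon is not running on the CI host). A minimal sketch reproducing that exact errno with Python's socket module — the path here is a temporary stand-in, not the real `/var/run/socket_vmnet`:

```python
import os
import socket
import tempfile

# A Unix socket that is bound on disk but never listen()ed on yields
# ECONNREFUSED on connect -- the same condition minikube hits when the
# socket_vmnet file exists but its daemon is not accepting connections.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)  # file now exists, but no listen(): nobody will accept

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(path)
    result = "connected"
except ConnectionRefusedError:
    result = "Connection refused"

client.close()
server.close()
print(result)
```

On the real host the fix is to (re)start the socket_vmnet service so something is actually accepting on the socket, which is why every retry in the log fails identically.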

TestRunningBinaryUpgrade (588.48s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1109977151 start -p running-upgrade-462000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1109977151 start -p running-upgrade-462000 --memory=2200 --vm-driver=qemu2 : (51.05733425s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.211729583s)

-- stdout --
	* [running-upgrade-462000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-462000" primary control-plane node in "running-upgrade-462000" cluster
	* Updating the running qemu2 "running-upgrade-462000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0717 11:02:28.543977    9411 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:02:28.544105    9411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:02:28.544109    9411 out.go:304] Setting ErrFile to fd 2...
	I0717 11:02:28.544112    9411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:02:28.544228    9411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:02:28.545252    9411 out.go:298] Setting JSON to false
	I0717 11:02:28.561907    9411 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5516,"bootTime":1721233832,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:02:28.561976    9411 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:02:28.566918    9411 out.go:177] * [running-upgrade-462000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:02:28.573853    9411 notify.go:220] Checking for updates...
	I0717 11:02:28.577798    9411 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:02:28.581690    9411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:02:28.584810    9411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:02:28.587847    9411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:02:28.590718    9411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:02:28.593879    9411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:02:28.597036    9411 config.go:182] Loaded profile config "running-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:02:28.599806    9411 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 11:02:28.602845    9411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:02:28.606889    9411 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:02:28.613825    9411 start.go:297] selected driver: qemu2
	I0717 11:02:28.613832    9411 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51278 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:02:28.613901    9411 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:02:28.616175    9411 cni.go:84] Creating CNI manager for ""
	I0717 11:02:28.616191    9411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:02:28.616217    9411 start.go:340] cluster config:
	{Name:running-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51278 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:02:28.616270    9411 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:02:28.625776    9411 out.go:177] * Starting "running-upgrade-462000" primary control-plane node in "running-upgrade-462000" cluster
	I0717 11:02:28.629861    9411 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:02:28.629886    9411 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0717 11:02:28.629902    9411 cache.go:56] Caching tarball of preloaded images
	I0717 11:02:28.629973    9411 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:02:28.629978    9411 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0717 11:02:28.630030    9411 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/config.json ...
	I0717 11:02:28.630444    9411 start.go:360] acquireMachinesLock for running-upgrade-462000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:02:28.630471    9411 start.go:364] duration metric: took 21µs to acquireMachinesLock for "running-upgrade-462000"
	I0717 11:02:28.630478    9411 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:02:28.630482    9411 fix.go:54] fixHost starting: 
	I0717 11:02:28.631048    9411 fix.go:112] recreateIfNeeded on running-upgrade-462000: state=Running err=<nil>
	W0717 11:02:28.631055    9411 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:02:28.633816    9411 out.go:177] * Updating the running qemu2 "running-upgrade-462000" VM ...
	I0717 11:02:28.641807    9411 machine.go:94] provisionDockerMachine start ...
	I0717 11:02:28.641836    9411 main.go:141] libmachine: Using SSH client type: native
	I0717 11:02:28.641938    9411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c2e9b0] 0x102c31210 <nil>  [] 0s} localhost 51246 <nil> <nil>}
	I0717 11:02:28.641942    9411 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 11:02:28.700359    9411 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-462000
	
	I0717 11:02:28.700369    9411 buildroot.go:166] provisioning hostname "running-upgrade-462000"
	I0717 11:02:28.700407    9411 main.go:141] libmachine: Using SSH client type: native
	I0717 11:02:28.700518    9411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c2e9b0] 0x102c31210 <nil>  [] 0s} localhost 51246 <nil> <nil>}
	I0717 11:02:28.700523    9411 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-462000 && echo "running-upgrade-462000" | sudo tee /etc/hostname
	I0717 11:02:28.761936    9411 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-462000
	
	I0717 11:02:28.761989    9411 main.go:141] libmachine: Using SSH client type: native
	I0717 11:02:28.762103    9411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c2e9b0] 0x102c31210 <nil>  [] 0s} localhost 51246 <nil> <nil>}
	I0717 11:02:28.762111    9411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-462000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-462000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-462000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 11:02:28.823411    9411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 11:02:28.823422    9411 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-6848/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-6848/.minikube}
	I0717 11:02:28.823429    9411 buildroot.go:174] setting up certificates
	I0717 11:02:28.823433    9411 provision.go:84] configureAuth start
	I0717 11:02:28.823437    9411 provision.go:143] copyHostCerts
	I0717 11:02:28.823501    9411 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem, removing ...
	I0717 11:02:28.823518    9411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem
	I0717 11:02:28.823652    9411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem (1082 bytes)
	I0717 11:02:28.823854    9411 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem, removing ...
	I0717 11:02:28.823858    9411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem
	I0717 11:02:28.823900    9411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem (1123 bytes)
	I0717 11:02:28.824008    9411 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem, removing ...
	I0717 11:02:28.824011    9411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem
	I0717 11:02:28.824050    9411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem (1679 bytes)
	I0717 11:02:28.824158    9411 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-462000 san=[127.0.0.1 localhost minikube running-upgrade-462000]
	I0717 11:02:29.011236    9411 provision.go:177] copyRemoteCerts
	I0717 11:02:29.011278    9411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 11:02:29.011288    9411 sshutil.go:53] new ssh client: &{IP:localhost Port:51246 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/running-upgrade-462000/id_rsa Username:docker}
	I0717 11:02:29.043201    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 11:02:29.050419    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 11:02:29.057370    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 11:02:29.063943    9411 provision.go:87] duration metric: took 240.506792ms to configureAuth
	I0717 11:02:29.063954    9411 buildroot.go:189] setting minikube options for container-runtime
	I0717 11:02:29.064067    9411 config.go:182] Loaded profile config "running-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:02:29.064110    9411 main.go:141] libmachine: Using SSH client type: native
	I0717 11:02:29.064206    9411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c2e9b0] 0x102c31210 <nil>  [] 0s} localhost 51246 <nil> <nil>}
	I0717 11:02:29.064211    9411 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 11:02:29.124309    9411 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 11:02:29.124319    9411 buildroot.go:70] root file system type: tmpfs
	I0717 11:02:29.124371    9411 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 11:02:29.124417    9411 main.go:141] libmachine: Using SSH client type: native
	I0717 11:02:29.124542    9411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c2e9b0] 0x102c31210 <nil>  [] 0s} localhost 51246 <nil> <nil>}
	I0717 11:02:29.124575    9411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 11:02:29.188447    9411 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 11:02:29.188509    9411 main.go:141] libmachine: Using SSH client type: native
	I0717 11:02:29.188635    9411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c2e9b0] 0x102c31210 <nil>  [] 0s} localhost 51246 <nil> <nil>}
	I0717 11:02:29.188643    9411 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 11:02:29.250015    9411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 11:02:29.250026    9411 machine.go:97] duration metric: took 608.228208ms to provisionDockerMachine
	I0717 11:02:29.250032    9411 start.go:293] postStartSetup for "running-upgrade-462000" (driver="qemu2")
	I0717 11:02:29.250038    9411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 11:02:29.250087    9411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 11:02:29.250096    9411 sshutil.go:53] new ssh client: &{IP:localhost Port:51246 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/running-upgrade-462000/id_rsa Username:docker}
	I0717 11:02:29.284298    9411 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 11:02:29.285597    9411 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 11:02:29.285603    9411 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-6848/.minikube/addons for local assets ...
	I0717 11:02:29.285666    9411 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-6848/.minikube/files for local assets ...
	I0717 11:02:29.285769    9411 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem -> 73362.pem in /etc/ssl/certs
	I0717 11:02:29.285863    9411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 11:02:29.288373    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem --> /etc/ssl/certs/73362.pem (1708 bytes)
	I0717 11:02:29.294859    9411 start.go:296] duration metric: took 44.823167ms for postStartSetup
	I0717 11:02:29.294871    9411 fix.go:56] duration metric: took 664.405375ms for fixHost
	I0717 11:02:29.294898    9411 main.go:141] libmachine: Using SSH client type: native
	I0717 11:02:29.294988    9411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c2e9b0] 0x102c31210 <nil>  [] 0s} localhost 51246 <nil> <nil>}
	I0717 11:02:29.294992    9411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 11:02:29.354385    9411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239349.750401347
	
	I0717 11:02:29.354395    9411 fix.go:216] guest clock: 1721239349.750401347
	I0717 11:02:29.354399    9411 fix.go:229] Guest: 2024-07-17 11:02:29.750401347 -0700 PDT Remote: 2024-07-17 11:02:29.294873 -0700 PDT m=+0.770001542 (delta=455.528347ms)
	I0717 11:02:29.354410    9411 fix.go:200] guest clock delta is within tolerance: 455.528347ms
	I0717 11:02:29.354413    9411 start.go:83] releasing machines lock for "running-upgrade-462000", held for 723.956ms
	I0717 11:02:29.354478    9411 ssh_runner.go:195] Run: cat /version.json
	I0717 11:02:29.354479    9411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 11:02:29.354488    9411 sshutil.go:53] new ssh client: &{IP:localhost Port:51246 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/running-upgrade-462000/id_rsa Username:docker}
	I0717 11:02:29.354498    9411 sshutil.go:53] new ssh client: &{IP:localhost Port:51246 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/running-upgrade-462000/id_rsa Username:docker}
	W0717 11:02:29.355133    9411 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51246: connect: connection refused
	I0717 11:02:29.355154    9411 retry.go:31] will retry after 259.215944ms: dial tcp [::1]:51246: connect: connection refused
	W0717 11:02:29.656673    9411 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 11:02:29.656798    9411 ssh_runner.go:195] Run: systemctl --version
	I0717 11:02:29.659377    9411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 11:02:29.662140    9411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 11:02:29.662177    9411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 11:02:29.666202    9411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 11:02:29.671692    9411 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 11:02:29.671700    9411 start.go:495] detecting cgroup driver to use...
	I0717 11:02:29.671824    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:02:29.678443    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0717 11:02:29.682085    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 11:02:29.685416    9411 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 11:02:29.685443    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 11:02:29.688677    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:02:29.691598    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 11:02:29.694469    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:02:29.697444    9411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 11:02:29.700253    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 11:02:29.703175    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 11:02:29.706459    9411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
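The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to select the cgroupfs driver and the standard CNI conf dir. A scratch-file sketch of the two key edits (the sample TOML is invented; the substitutions match the log):

```shell
# Illustrative only: the two containerd edits that matter for the cgroup driver,
# run against a scratch config.toml instead of /etc/containerd/config.toml.
tmp=$(mktemp -d)
cat > "$tmp/config.toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/etc/cni/net.mk"
EOF
# Select the cgroupfs driver by forcing SystemdCgroup = false.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp/config.toml"
# Point the CNI plugin at the standard conf dir.
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$tmp/config.toml"
grep -E 'SystemdCgroup|conf_dir' "$tmp/config.toml"
rm -rf "$tmp"
```

The leading `( *)` capture keeps the original indentation, so the edit works at any nesting depth in the TOML.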
	I0717 11:02:29.709661    9411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 11:02:29.712142    9411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 11:02:29.714835    9411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:02:29.800611    9411 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 11:02:29.809179    9411 start.go:495] detecting cgroup driver to use...
	I0717 11:02:29.809243    9411 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 11:02:29.814345    9411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:02:29.819243    9411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 11:02:29.827509    9411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:02:29.832087    9411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:02:29.836683    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:02:29.841732    9411 ssh_runner.go:195] Run: which cri-dockerd
	I0717 11:02:29.843108    9411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 11:02:29.846501    9411 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 11:02:29.851596    9411 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 11:02:29.940720    9411 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 11:02:30.027731    9411 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 11:02:30.027786    9411 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 11:02:30.032969    9411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:02:30.129808    9411 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:02:32.654474    9411 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.524712666s)
	I0717 11:02:32.654542    9411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 11:02:32.659259    9411 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 11:02:32.666489    9411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:02:32.672283    9411 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 11:02:32.736416    9411 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 11:02:32.819063    9411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:02:32.904855    9411 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 11:02:32.910886    9411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:02:32.915947    9411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:02:32.978083    9411 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 11:02:33.019075    9411 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 11:02:33.019143    9411 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 11:02:33.021108    9411 start.go:563] Will wait 60s for crictl version
	I0717 11:02:33.021157    9411 ssh_runner.go:195] Run: which crictl
	I0717 11:02:33.022450    9411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 11:02:33.034752    9411 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0717 11:02:33.034821    9411 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:02:33.047242    9411 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:02:33.069350    9411 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0717 11:02:33.069418    9411 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0717 11:02:33.070846    9411 kubeadm.go:883] updating cluster {Name:running-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51278 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0717 11:02:33.070898    9411 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:02:33.070937    9411 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:02:33.081739    9411 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:02:33.081750    9411 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:02:33.081794    9411 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:02:33.084841    9411 ssh_runner.go:195] Run: which lz4
	I0717 11:02:33.086285    9411 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 11:02:33.087737    9411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 11:02:33.087746    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0717 11:02:33.966897    9411 docker.go:649] duration metric: took 880.668084ms to copy over tarball
	I0717 11:02:33.966958    9411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 11:02:35.065554    9411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.098610125s)
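The preload step copies the lz4-compressed image tarball into the guest and unpacks it over `/var`. A minimal local sketch of the same `tar -C` extraction (the lz4 filter and xattr flags from the real command are dropped so the sketch runs anywhere, and all paths are scratch paths):

```shell
# Illustrative only: the shape of the preload unpack. The real invocation is
#   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
tmp=$(mktemp -d)
mkdir -p "$tmp/src/lib/docker/overlay2"
echo layer-data > "$tmp/src/lib/docker/overlay2/layer"
tar -C "$tmp/src" -cf "$tmp/preloaded.tar" .   # stand-in for the cached tarball
mkdir "$tmp/var"
tar -C "$tmp/var" -xf "$tmp/preloaded.tar"     # unpack over the target root
ls "$tmp/var/lib/docker/overlay2"
rm -rf "$tmp"
```

`-C` makes tar change directory before extracting, which is how the archive's `lib/docker/...` entries land under `/var` in the real run.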
	I0717 11:02:35.065568    9411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 11:02:35.081884    9411 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:02:35.084967    9411 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0717 11:02:35.090180    9411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:02:35.153451    9411 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:02:36.352825    9411 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.199388s)
	I0717 11:02:36.352915    9411 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:02:36.367324    9411 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:02:36.367333    9411 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:02:36.367339    9411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 11:02:36.371149    9411 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:02:36.373145    9411 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:02:36.375191    9411 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:02:36.375624    9411 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:02:36.378086    9411 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 11:02:36.378317    9411 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:02:36.379847    9411 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:02:36.379967    9411 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:02:36.381394    9411 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 11:02:36.381511    9411 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:02:36.382987    9411 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:02:36.383386    9411 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:02:36.384470    9411 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:02:36.384998    9411 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:02:36.387023    9411 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:02:36.387156    9411 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:02:36.721369    9411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:02:36.731480    9411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0717 11:02:36.731507    9411 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:02:36.731559    9411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:02:36.741957    9411 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0717 11:02:36.755917    9411 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0717 11:02:36.756045    9411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:02:36.765941    9411 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0717 11:02:36.765961    9411 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:02:36.766005    9411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:02:36.776463    9411 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 11:02:36.776575    9411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:02:36.778292    9411 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0717 11:02:36.778306    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0717 11:02:36.779896    9411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0717 11:02:36.799292    9411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:02:36.810250    9411 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0717 11:02:36.810275    9411 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0717 11:02:36.810338    9411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0717 11:02:36.828705    9411 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:02:36.828717    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0717 11:02:36.831573    9411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:02:36.836332    9411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0717 11:02:36.836357    9411 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:02:36.836416    9411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:02:36.864580    9411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:02:36.866892    9411 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0717 11:02:36.867006    9411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0717 11:02:36.893650    9411 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0717 11:02:36.893687    9411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0717 11:02:36.893700    9411 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0717 11:02:36.893710    9411 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:02:36.893750    9411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0717 11:02:36.893754    9411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:02:36.893758    9411 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0717 11:02:36.893759    9411 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:02:36.893771    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0717 11:02:36.893788    9411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:02:36.908816    9411 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0717 11:02:36.908836    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0717 11:02:36.913755    9411 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0717 11:02:36.916907    9411 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0717 11:02:36.930999    9411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0717 11:02:36.943339    9411 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0717 11:02:36.945348    9411 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0717 11:02:36.945366    9411 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:02:36.945418    9411 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0717 11:02:36.955203    9411 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0717 11:02:37.112390    9411 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 11:02:37.112620    9411 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:02:37.138095    9411 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 11:02:37.138133    9411 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:02:37.138218    9411 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:02:38.411796    9411 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.273575375s)
	I0717 11:02:38.411856    9411 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 11:02:38.412137    9411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:02:38.416752    9411 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 11:02:38.416781    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0717 11:02:38.473179    9411 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:02:38.473191    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0717 11:02:38.703853    9411 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 11:02:38.703886    9411 cache_images.go:92] duration metric: took 2.336598417s to LoadCachedImages
	W0717 11:02:38.703932    9411 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0717 11:02:38.703938    9411 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0717 11:02:38.704003    9411 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-462000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 11:02:38.704068    9411 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 11:02:38.717336    9411 cni.go:84] Creating CNI manager for ""
	I0717 11:02:38.717350    9411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:02:38.717355    9411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 11:02:38.717363    9411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-462000 NodeName:running-upgrade-462000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 11:02:38.717427    9411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-462000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 11:02:38.717491    9411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0717 11:02:38.720320    9411 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 11:02:38.720343    9411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 11:02:38.722927    9411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 11:02:38.727964    9411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 11:02:38.732782    9411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0717 11:02:38.738044    9411 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0717 11:02:38.739228    9411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:02:38.826476    9411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:02:38.831817    9411 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000 for IP: 10.0.2.15
	I0717 11:02:38.831823    9411 certs.go:194] generating shared ca certs ...
	I0717 11:02:38.831831    9411 certs.go:226] acquiring lock for ca certs: {Name:mk50b621e3b03c5626e0e338e372bd26b7b413d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:02:38.832073    9411 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.key
	I0717 11:02:38.832115    9411 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.key
	I0717 11:02:38.832121    9411 certs.go:256] generating profile certs ...
	I0717 11:02:38.832189    9411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/client.key
	I0717 11:02:38.832200    9411 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.key.dfac6b1c
	I0717 11:02:38.832211    9411 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.crt.dfac6b1c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0717 11:02:38.876610    9411 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.crt.dfac6b1c ...
	I0717 11:02:38.876616    9411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.crt.dfac6b1c: {Name:mk60b4e6ad7f438e2711b2f2199a78242c7e4975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:02:38.876827    9411 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.key.dfac6b1c ...
	I0717 11:02:38.876832    9411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.key.dfac6b1c: {Name:mk68ee57112b65b0681c72fea6cd3312feb2cbdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:02:38.876967    9411 certs.go:381] copying /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.crt.dfac6b1c -> /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.crt
	I0717 11:02:38.877151    9411 certs.go:385] copying /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.key.dfac6b1c -> /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.key
	I0717 11:02:38.877301    9411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/proxy-client.key
	I0717 11:02:38.877433    9411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336.pem (1338 bytes)
	W0717 11:02:38.877459    9411 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336_empty.pem, impossibly tiny 0 bytes
	I0717 11:02:38.877464    9411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 11:02:38.877483    9411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem (1082 bytes)
	I0717 11:02:38.877501    9411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem (1123 bytes)
	I0717 11:02:38.877518    9411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem (1679 bytes)
	I0717 11:02:38.877564    9411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem (1708 bytes)
	I0717 11:02:38.877901    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 11:02:38.886784    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 11:02:38.894663    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 11:02:38.902151    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 11:02:38.909629    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 11:02:38.916520    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 11:02:38.923142    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 11:02:38.930548    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 11:02:38.938250    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem --> /usr/share/ca-certificates/73362.pem (1708 bytes)
	I0717 11:02:38.945164    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 11:02:38.951778    9411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336.pem --> /usr/share/ca-certificates/7336.pem (1338 bytes)
	I0717 11:02:38.958754    9411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 11:02:38.963700    9411 ssh_runner.go:195] Run: openssl version
	I0717 11:02:38.965397    9411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 11:02:38.968351    9411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:02:38.969848    9411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:02:38.969866    9411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:02:38.971630    9411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 11:02:38.974947    9411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7336.pem && ln -fs /usr/share/ca-certificates/7336.pem /etc/ssl/certs/7336.pem"
	I0717 11:02:38.978313    9411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7336.pem
	I0717 11:02:38.979822    9411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:49 /usr/share/ca-certificates/7336.pem
	I0717 11:02:38.979841    9411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7336.pem
	I0717 11:02:38.981586    9411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7336.pem /etc/ssl/certs/51391683.0"
	I0717 11:02:38.984557    9411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73362.pem && ln -fs /usr/share/ca-certificates/73362.pem /etc/ssl/certs/73362.pem"
	I0717 11:02:38.987565    9411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73362.pem
	I0717 11:02:38.989110    9411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:49 /usr/share/ca-certificates/73362.pem
	I0717 11:02:38.989127    9411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73362.pem
	I0717 11:02:38.990862    9411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73362.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 11:02:38.994046    9411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 11:02:38.995611    9411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 11:02:38.997407    9411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 11:02:38.999109    9411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 11:02:39.000986    9411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 11:02:39.002800    9411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 11:02:39.004637    9411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 11:02:39.006303    9411 kubeadm.go:392] StartCluster: {Name:running-upgrade-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51278 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:ru
nning-upgrade-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:02:39.006375    9411 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:02:39.017322    9411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 11:02:39.020507    9411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 11:02:39.020512    9411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 11:02:39.020537    9411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 11:02:39.023138    9411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:02:39.023174    9411 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-462000" does not appear in /Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:02:39.023189    9411 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-6848/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-462000" cluster setting kubeconfig missing "running-upgrade-462000" context setting]
	I0717 11:02:39.023360    9411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/kubeconfig: {Name:mk327624617af42cf4cf2f31d3ffb3402af5684d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:02:39.024286    9411 kapi.go:59] client config for running-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fc3730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:02:39.025166    9411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 11:02:39.027814    9411 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-462000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0717 11:02:39.027819    9411 kubeadm.go:1160] stopping kube-system containers ...
	I0717 11:02:39.027856    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:02:39.038569    9411 docker.go:483] Stopping containers: [43b5a9c8862d a9cec17320c8 a5e3d748eee0 389d45a29a81 3ac0094052c5 68bd446affdc ce2bc22a7fee 4cce71ba6784 4dcb4cd5c6c7 480d72487256 9681113404fc 384b1eb566ff 4de2de54648c d7596f74ee93]
	I0717 11:02:39.038632    9411 ssh_runner.go:195] Run: docker stop 43b5a9c8862d a9cec17320c8 a5e3d748eee0 389d45a29a81 3ac0094052c5 68bd446affdc ce2bc22a7fee 4cce71ba6784 4dcb4cd5c6c7 480d72487256 9681113404fc 384b1eb566ff 4de2de54648c d7596f74ee93
	I0717 11:02:39.049508    9411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 11:02:39.141249    9411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:02:39.144660    9411 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 18:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 17 18:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 17 18:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 17 18:02 /etc/kubernetes/scheduler.conf
	
	I0717 11:02:39.144683    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/admin.conf
	I0717 11:02:39.147255    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:02:39.147279    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:02:39.150292    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/kubelet.conf
	I0717 11:02:39.153658    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:02:39.153682    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:02:39.156560    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/controller-manager.conf
	I0717 11:02:39.159138    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:02:39.159156    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:02:39.162435    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/scheduler.conf
	I0717 11:02:39.165047    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:02:39.165073    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 11:02:39.167630    9411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:02:39.170688    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:02:39.210144    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:02:39.525077    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:02:39.833651    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:02:39.879389    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:02:39.905539    9411 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:02:39.905630    9411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:02:40.408011    9411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:02:40.907656    9411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:02:40.912331    9411 api_server.go:72] duration metric: took 1.006817709s to wait for apiserver process to appear ...
	I0717 11:02:40.912341    9411 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:02:40.912350    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:02:45.914479    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:02:45.914560    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:02:50.915076    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:02:50.915120    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:02:55.968889    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:02:55.968978    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:00.969949    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:00.970027    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:05.971874    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:05.971956    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:10.972888    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:10.972965    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:15.975231    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:15.975317    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:20.978006    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:20.978092    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:25.980740    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:25.980823    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:30.982972    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:30.983088    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:35.985686    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:35.985780    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:40.988272    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:40.988423    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:03:41.009127    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:03:41.009199    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:03:41.020491    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:03:41.020558    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:03:41.031931    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:03:41.032006    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:03:41.043227    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:03:41.043313    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:03:41.058153    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:03:41.058232    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:03:41.069632    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:03:41.069702    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:03:41.080215    9411 logs.go:276] 0 containers: []
	W0717 11:03:41.080230    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:03:41.080281    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:03:41.091162    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:03:41.091181    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:03:41.091186    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:03:41.111993    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:03:41.112004    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:03:41.126948    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:03:41.126959    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:03:41.138733    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:03:41.138742    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:03:41.150362    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:03:41.150373    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:03:41.175591    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:03:41.175601    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:03:41.194630    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:03:41.194639    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:03:41.209276    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:03:41.209287    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:03:41.221690    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:03:41.221699    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:03:41.234184    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:03:41.234194    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:03:41.270885    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:03:41.270894    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:03:41.275546    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:03:41.275552    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:03:41.349068    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:03:41.349080    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:03:41.363445    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:03:41.363454    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:03:41.376763    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:03:41.376774    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:03:41.398027    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:03:41.398040    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:03:41.413344    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:03:41.413357    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:03:43.927752    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:48.930593    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:48.931037    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:03:48.971734    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:03:48.971854    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:03:48.994264    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:03:48.994357    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:03:49.009479    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:03:49.009559    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:03:49.022362    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:03:49.022433    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:03:49.034875    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:03:49.034952    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:03:49.047210    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:03:49.047275    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:03:49.063191    9411 logs.go:276] 0 containers: []
	W0717 11:03:49.063203    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:03:49.063271    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:03:49.073803    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:03:49.073822    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:03:49.073827    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:03:49.100007    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:03:49.100015    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:03:49.137892    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:03:49.137912    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:03:49.142648    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:03:49.142657    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:03:49.161374    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:03:49.161388    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:03:49.175427    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:03:49.175437    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:03:49.187768    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:03:49.187783    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:03:49.199223    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:03:49.199233    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:03:49.210694    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:03:49.210705    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:03:49.222525    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:03:49.222536    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:03:49.236066    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:03:49.236075    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:03:49.253738    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:03:49.253751    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:03:49.266431    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:03:49.266444    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:03:49.306344    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:03:49.306358    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:03:49.321022    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:03:49.321033    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:03:49.339488    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:03:49.339499    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:03:49.351102    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:03:49.351115    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:03:51.866667    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:03:56.869519    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:03:56.869950    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:03:56.911524    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:03:56.911655    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:03:56.931551    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:03:56.931639    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:03:56.949330    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:03:56.949404    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:03:56.961047    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:03:56.961130    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:03:56.971851    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:03:56.971926    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:03:56.984603    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:03:56.984670    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:03:56.995245    9411 logs.go:276] 0 containers: []
	W0717 11:03:56.995259    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:03:56.995317    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:03:57.005971    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:03:57.005988    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:03:57.005993    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:03:57.040846    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:03:57.040853    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:03:57.052087    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:03:57.052098    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:03:57.056513    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:03:57.056520    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:03:57.070396    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:03:57.070408    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:03:57.084180    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:03:57.084191    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:03:57.096039    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:03:57.096049    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:03:57.113917    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:03:57.113927    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:03:57.132525    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:03:57.132534    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:03:57.145307    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:03:57.145318    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:03:57.180275    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:03:57.180288    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:03:57.200544    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:03:57.200554    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:03:57.214391    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:03:57.214401    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:03:57.233211    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:03:57.233234    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:03:57.245501    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:03:57.245513    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:03:57.257495    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:03:57.257506    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:03:57.270113    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:03:57.270121    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:03:59.799081    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:04:04.801795    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:04:04.802294    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:04:04.850183    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:04:04.850300    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:04:04.869444    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:04:04.869529    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:04:04.888861    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:04:04.888933    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:04:04.900286    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:04:04.900359    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:04:04.913328    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:04:04.913395    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:04:04.924420    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:04:04.924493    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:04:04.935108    9411 logs.go:276] 0 containers: []
	W0717 11:04:04.935123    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:04:04.935173    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:04:04.945228    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:04:04.945243    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:04:04.945248    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:04:04.957823    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:04:04.957836    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:04:04.992768    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:04:04.992779    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:04:05.004851    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:04:05.004860    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:04:05.018926    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:04:05.018935    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:04:05.033646    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:04:05.033656    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:04:05.038351    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:04:05.038358    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:04:05.049589    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:04:05.049599    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:04:05.086254    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:04:05.086266    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:04:05.107897    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:04:05.107907    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:04:05.132833    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:04:05.132842    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:04:05.144045    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:04:05.144057    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:04:05.156388    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:04:05.156399    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:04:05.168112    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:04:05.168123    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:04:05.182224    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:04:05.182236    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:04:05.206460    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:04:05.206472    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:04:05.221488    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:04:05.221500    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:04:07.751979    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:04:12.754823    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:04:12.755258    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:04:12.793346    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:04:12.793480    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:04:12.817849    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:04:12.817952    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:04:12.832424    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:04:12.832491    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:04:12.844172    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:04:12.844244    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:04:12.857544    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:04:12.857616    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:04:12.868418    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:04:12.868485    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:04:12.878268    9411 logs.go:276] 0 containers: []
	W0717 11:04:12.878279    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:04:12.878334    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:04:12.888473    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:04:12.888491    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:04:12.888496    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:04:12.899895    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:04:12.899908    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:04:12.917362    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:04:12.917373    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:04:12.932531    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:04:12.932543    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:04:12.949670    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:04:12.949679    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:04:12.984515    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:04:12.984527    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:04:13.018741    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:04:13.018752    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:04:13.037822    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:04:13.037834    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:04:13.061678    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:04:13.061688    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:04:13.078841    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:04:13.078852    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:04:13.083565    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:04:13.083572    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:04:13.097159    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:04:13.097170    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:04:13.111744    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:04:13.111755    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:04:13.123339    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:04:13.123352    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:04:13.137550    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:04:13.137560    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:04:13.148810    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:04:13.148820    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:04:13.174489    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:04:13.174499    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:04:15.689701    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:04:20.692564    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:04:20.693012    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:04:20.734232    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:04:20.734367    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:04:20.757174    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:04:20.757289    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:04:20.773264    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:04:20.773343    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:04:20.785838    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:04:20.785909    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:04:20.796888    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:04:20.796957    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:04:20.807770    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:04:20.807837    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:04:20.818364    9411 logs.go:276] 0 containers: []
	W0717 11:04:20.818376    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:04:20.818433    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:04:20.829365    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:04:20.829382    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:04:20.829387    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:04:20.840548    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:04:20.840558    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:04:20.875353    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:04:20.875363    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:04:20.889194    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:04:20.889207    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:04:20.915786    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:04:20.915799    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:04:20.927288    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:04:20.927306    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:04:20.932035    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:04:20.932049    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:04:20.945536    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:04:20.945549    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:04:20.955994    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:04:20.956005    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:04:20.982341    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:04:20.982352    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:04:21.017934    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:04:21.017948    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:04:21.036068    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:04:21.036077    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:04:21.050448    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:04:21.050459    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:04:21.062721    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:04:21.062734    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:04:21.081399    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:04:21.081411    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:04:21.093386    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:04:21.093401    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:04:21.107160    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:04:21.107172    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:04:23.620524    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:04:28.623184    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:04:28.623614    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:04:28.661058    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:04:28.661190    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:04:28.682117    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:04:28.682209    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:04:28.697513    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:04:28.697587    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:04:28.709228    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:04:28.709288    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:04:28.719654    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:04:28.719716    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:04:28.730354    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:04:28.730423    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:04:28.740406    9411 logs.go:276] 0 containers: []
	W0717 11:04:28.740418    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:04:28.740477    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:04:28.752550    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:04:28.752567    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:04:28.752572    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:04:28.756768    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:04:28.756776    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:04:28.792730    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:04:28.792744    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:04:28.806810    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:04:28.806822    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:04:28.825375    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:04:28.825387    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:04:28.843523    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:04:28.843535    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:04:28.858111    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:04:28.858123    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:04:28.869252    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:04:28.869263    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:04:28.880892    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:04:28.880903    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:04:28.893217    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:04:28.893229    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:04:28.907264    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:04:28.907277    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:04:28.918702    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:04:28.918714    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:04:28.930276    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:04:28.930287    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:04:28.956487    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:04:28.956495    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:04:28.991234    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:04:28.991244    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:04:29.012683    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:04:29.012696    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:04:29.030222    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:04:29.030236    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:04:31.545491    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:04:36.547740    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:04:36.547892    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:04:36.565932    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:04:36.565986    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:04:36.580597    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:04:36.580666    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:04:36.592783    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:04:36.592838    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:04:36.604459    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:04:36.604512    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:04:36.622004    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:04:36.622055    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:04:36.632486    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:04:36.632543    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:04:36.644212    9411 logs.go:276] 0 containers: []
	W0717 11:04:36.644226    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:04:36.644276    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:04:36.655386    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:04:36.655403    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:04:36.655408    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:04:36.671778    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:04:36.671791    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:04:36.686053    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:04:36.686065    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:04:36.710634    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:04:36.710642    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:04:36.722325    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:04:36.722334    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:04:36.745231    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:04:36.745241    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:04:36.764043    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:04:36.764051    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:04:36.782247    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:04:36.782259    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:04:36.798025    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:04:36.798044    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:04:36.814361    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:04:36.814380    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:04:36.830756    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:04:36.830772    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:04:36.848090    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:04:36.848110    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:04:36.861519    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:04:36.861531    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:04:36.873940    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:04:36.873953    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:04:36.878934    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:04:36.878950    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:04:36.892683    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:04:36.892696    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:04:36.930957    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:04:36.930970    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:04:39.468549    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:04:44.469497    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:04:44.469636    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:04:44.491769    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:04:44.491865    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:04:44.507272    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:04:44.507347    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:04:44.520151    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:04:44.520219    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:04:44.531605    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:04:44.531665    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:04:44.542100    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:04:44.542166    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:04:44.553151    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:04:44.553211    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:04:44.563185    9411 logs.go:276] 0 containers: []
	W0717 11:04:44.563195    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:04:44.563245    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:04:44.573664    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:04:44.573680    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:04:44.573685    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:04:44.610268    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:04:44.610277    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:04:44.644025    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:04:44.644038    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:04:44.662608    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:04:44.662619    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:04:44.674343    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:04:44.674357    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:04:44.678598    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:04:44.678606    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:04:44.690878    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:04:44.690888    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:04:44.706072    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:04:44.706084    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:04:44.731194    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:04:44.731201    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:04:44.765944    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:04:44.765955    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:04:44.780361    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:04:44.780371    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:04:44.795050    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:04:44.795059    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:04:44.806219    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:04:44.806234    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:04:44.820070    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:04:44.820082    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:04:44.837170    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:04:44.837180    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:04:44.847992    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:04:44.848005    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:04:44.865177    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:04:44.865189    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:04:47.378282    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:04:52.380844    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:04:52.381195    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:04:52.410988    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:04:52.411115    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:04:52.430402    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:04:52.430485    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:04:52.445377    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:04:52.445445    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:04:52.456906    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:04:52.456966    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:04:52.467761    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:04:52.467833    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:04:52.478453    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:04:52.478511    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:04:52.488884    9411 logs.go:276] 0 containers: []
	W0717 11:04:52.488899    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:04:52.488957    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:04:52.499901    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:04:52.499918    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:04:52.499924    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:04:52.511915    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:04:52.511928    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:04:52.536272    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:04:52.536282    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:04:52.540558    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:04:52.540566    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:04:52.563046    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:04:52.563054    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:04:52.577947    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:04:52.577959    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:04:52.589817    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:04:52.589828    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:04:52.600977    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:04:52.600989    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:04:52.613353    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:04:52.613364    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:04:52.627733    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:04:52.627745    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:04:52.643269    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:04:52.643280    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:04:52.654963    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:04:52.654974    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:04:52.672334    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:04:52.672346    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:04:52.686800    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:04:52.686811    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:04:52.704119    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:04:52.704132    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:04:52.741410    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:04:52.741420    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:04:52.781049    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:04:52.781065    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:04:55.299356    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:00.299909    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:00.300193    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:00.329120    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:00.329242    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:00.345819    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:00.345893    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:00.362672    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:00.362738    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:00.373816    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:00.373889    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:00.383779    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:00.383849    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:00.394281    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:00.394343    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:00.404219    9411 logs.go:276] 0 containers: []
	W0717 11:05:00.404230    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:00.404283    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:00.415109    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:00.415128    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:00.415134    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:00.432747    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:00.432758    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:00.447045    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:00.447055    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:00.459033    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:00.459046    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:00.463694    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:00.463702    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:00.478193    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:00.478205    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:00.493618    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:00.493631    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:00.505120    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:00.505132    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:00.516258    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:00.516273    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:00.532663    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:00.532673    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:00.570509    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:00.570519    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:00.584632    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:00.584645    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:00.598247    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:00.598260    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:00.615537    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:00.615548    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:00.626821    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:00.626855    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:00.650756    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:00.650766    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:00.689036    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:00.689051    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:03.210469    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:08.212761    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:08.212940    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:08.228448    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:08.228524    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:08.241514    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:08.241589    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:08.252793    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:08.252860    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:08.269380    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:08.269451    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:08.279999    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:08.280068    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:08.291305    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:08.291371    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:08.305239    9411 logs.go:276] 0 containers: []
	W0717 11:05:08.305251    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:08.305312    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:08.317105    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:08.317127    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:08.317133    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:08.356514    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:08.356531    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:08.371880    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:08.371894    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:08.391684    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:08.391701    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:08.403939    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:08.403951    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:08.422193    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:08.422204    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:08.426447    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:08.426457    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:08.438355    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:08.438368    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:08.458447    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:08.458459    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:08.471258    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:08.471269    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:08.484896    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:08.484909    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:08.509553    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:08.509560    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:08.548811    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:08.548825    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:08.562818    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:08.562829    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:08.577452    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:08.577465    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:08.595643    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:08.595653    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:08.606615    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:08.606629    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:11.121320    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:16.122754    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:16.122883    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:16.140195    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:16.140271    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:16.153453    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:16.153540    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:16.170927    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:16.171012    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:16.182414    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:16.182498    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:16.193044    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:16.193109    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:16.203511    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:16.203573    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:16.214441    9411 logs.go:276] 0 containers: []
	W0717 11:05:16.214452    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:16.214508    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:16.224987    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:16.225005    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:16.225011    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:16.250684    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:16.250700    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:16.286339    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:16.286356    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:16.301473    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:16.301488    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:16.314345    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:16.314358    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:16.334519    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:16.334530    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:16.348995    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:16.349006    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:16.366322    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:16.366332    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:16.378293    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:16.378304    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:16.394301    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:16.394313    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:16.431772    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:16.431785    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:16.436664    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:16.436673    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:16.453614    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:16.453625    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:16.465920    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:16.465936    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:16.478102    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:16.478114    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:16.496804    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:16.496819    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:16.509627    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:16.509639    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:19.026466    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:24.028617    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:24.028753    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:24.039691    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:24.039775    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:24.050889    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:24.050976    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:24.061736    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:24.061802    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:24.072639    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:24.072709    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:24.083209    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:24.083271    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:24.094808    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:24.094881    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:24.104914    9411 logs.go:276] 0 containers: []
	W0717 11:05:24.104926    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:24.104988    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:24.116216    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:24.116235    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:24.116240    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:24.121035    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:24.121042    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:24.135446    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:24.135455    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:24.161694    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:24.161707    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:24.176144    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:24.176158    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:24.195442    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:24.195455    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:24.208439    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:24.208450    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:24.222799    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:24.222810    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:24.234204    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:24.234216    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:24.271363    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:24.271371    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:24.307904    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:24.307915    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:24.329956    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:24.329971    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:24.347508    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:24.347520    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:24.364656    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:24.364667    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:24.376686    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:24.376697    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:24.389470    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:24.389480    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:24.401896    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:24.401908    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:26.916981    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:31.919588    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:31.919709    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:31.931304    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:31.931390    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:31.943136    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:31.943225    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:31.954291    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:31.954354    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:31.968720    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:31.968794    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:31.983281    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:31.983355    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:31.995479    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:31.995554    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:32.007729    9411 logs.go:276] 0 containers: []
	W0717 11:05:32.007742    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:32.007805    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:32.019413    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:32.019432    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:32.019440    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:32.032085    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:32.032098    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:32.057392    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:32.057409    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:32.069971    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:32.069987    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:32.107385    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:32.107413    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:32.145047    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:32.145064    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:32.160545    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:32.160558    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:32.173826    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:32.173840    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:32.186843    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:32.186855    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:32.201798    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:32.201813    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:32.219730    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:32.219742    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:32.232336    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:32.232351    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:32.257051    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:32.257067    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:32.277094    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:32.277110    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:32.292629    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:32.292642    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:32.313181    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:32.313198    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:32.317634    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:32.317641    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:34.833039    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:39.834587    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:39.835003    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:39.879910    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:39.880030    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:39.899854    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:39.899950    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:39.914559    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:39.914638    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:39.926799    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:39.926873    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:39.937872    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:39.937937    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:39.948689    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:39.948747    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:39.959301    9411 logs.go:276] 0 containers: []
	W0717 11:05:39.959322    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:39.959381    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:39.982387    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:39.982405    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:39.982411    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:39.994173    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:39.994182    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:40.015917    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:40.015929    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:40.028398    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:40.028410    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:40.066414    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:40.066434    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:40.120273    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:40.120287    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:40.135731    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:40.135743    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:40.147281    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:40.147292    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:40.161681    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:40.161694    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:40.184729    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:40.184738    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:40.196263    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:40.196275    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:40.200891    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:40.200900    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:40.214874    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:40.214887    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:40.226293    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:40.226306    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:40.243828    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:40.243840    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:40.256625    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:40.256637    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:40.292860    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:40.292874    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:42.816248    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:47.817783    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:47.817895    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:47.828933    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:47.829001    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:47.839050    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:47.839119    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:47.858304    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:47.858373    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:47.868976    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:47.869041    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:47.880224    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:47.880296    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:47.891163    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:47.891233    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:47.902355    9411 logs.go:276] 0 containers: []
	W0717 11:05:47.902372    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:47.902454    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:47.913004    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:47.913024    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:47.913030    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:47.923953    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:47.923966    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:47.938034    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:47.938047    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:47.954104    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:47.954115    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:47.965859    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:47.965872    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:47.984667    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:47.984678    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:48.008995    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:48.009001    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:48.020511    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:48.020525    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:48.034395    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:48.034406    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:48.051322    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:48.051330    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:48.062617    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:48.062628    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:48.073778    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:48.073788    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:48.114496    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:48.114507    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:48.131457    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:48.131469    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:48.166481    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:48.166490    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:48.173276    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:48.173285    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:48.195359    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:48.195371    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:50.712144    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:55.714895    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:55.715065    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:55.734277    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:55.734355    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:55.748063    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:55.748139    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:55.759168    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:55.759231    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:55.770784    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:55.770845    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:55.782940    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:55.783005    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:55.794281    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:55.794340    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:55.805046    9411 logs.go:276] 0 containers: []
	W0717 11:05:55.805061    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:55.805115    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:55.815555    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:55.815573    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:55.815579    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:55.827943    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:55.827955    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:55.863954    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:55.863964    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:55.878176    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:55.878187    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:55.892381    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:55.892392    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:55.916989    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:55.916998    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:55.935528    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:55.935539    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:55.947713    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:55.947725    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:55.960067    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:55.960078    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:55.971374    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:55.971387    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:55.984244    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:55.984258    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:56.005194    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:56.005205    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:56.024611    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:56.024622    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:56.062180    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:56.062188    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:56.066461    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:56.066470    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:56.080064    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:56.080078    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:56.099368    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:56.099381    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:58.613395    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:03.615603    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:03.616002    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:03.657380    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:03.657517    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:03.679840    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:03.679952    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:03.695319    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:03.695398    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:03.707995    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:03.708064    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:03.719250    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:03.719325    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:03.729990    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:03.730058    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:03.741655    9411 logs.go:276] 0 containers: []
	W0717 11:06:03.741666    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:03.741723    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:03.752188    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:03.752206    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:03.752212    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:03.766850    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:03.766863    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:03.780816    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:03.780831    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:03.792400    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:03.792414    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:03.817388    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:03.817401    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:03.836373    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:03.836386    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:03.850185    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:03.850203    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:03.868054    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:03.868066    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:03.879784    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:03.879796    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:03.891473    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:03.891485    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:03.903404    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:03.903418    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:03.918921    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:03.918932    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:03.923645    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:03.923652    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:03.960706    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:03.960720    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:03.975406    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:03.975419    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:03.988816    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:03.988828    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:04.011977    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:04.011987    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:06.550539    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:11.553374    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:11.554006    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:11.591354    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:11.591494    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:11.613184    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:11.613295    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:11.627846    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:11.627917    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:11.640187    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:11.640251    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:11.650907    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:11.650979    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:11.661889    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:11.661950    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:11.676500    9411 logs.go:276] 0 containers: []
	W0717 11:06:11.676513    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:11.676561    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:11.686955    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:11.686973    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:11.686978    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:11.701091    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:11.701102    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:11.719419    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:11.719434    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:11.734115    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:11.734133    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:11.756968    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:11.756995    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:11.795571    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:11.795582    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:11.818389    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:11.818403    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:11.832866    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:11.832878    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:11.848612    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:11.848626    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:11.860823    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:11.860835    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:11.865233    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:11.865249    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:11.877646    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:11.877658    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:11.890305    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:11.890321    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:11.901907    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:11.901922    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:11.939421    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:11.939435    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:11.954223    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:11.954236    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:11.971917    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:11.971929    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:14.488111    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:19.490233    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:19.490320    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:19.501923    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:19.501999    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:19.512910    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:19.512978    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:19.523754    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:19.523819    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:19.534600    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:19.534671    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:19.546165    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:19.546229    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:19.557244    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:19.557308    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:19.567715    9411 logs.go:276] 0 containers: []
	W0717 11:06:19.567727    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:19.567783    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:19.579032    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:19.579049    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:19.579055    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:19.591021    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:19.591033    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:19.605740    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:19.605750    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:19.617417    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:19.617429    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:19.642200    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:19.642212    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:19.684467    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:19.684479    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:19.706941    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:19.706955    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:19.723747    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:19.723766    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:19.748241    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:19.748255    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:19.764364    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:19.764377    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:19.782710    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:19.782722    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:19.822847    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:19.822866    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:19.828045    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:19.828053    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:19.843123    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:19.843136    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:19.861694    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:19.861708    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:19.876167    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:19.876181    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:19.888350    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:19.888362    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:22.402661    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:27.404991    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:27.405203    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:27.426532    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:27.426623    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:27.449783    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:27.449850    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:27.461282    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:27.461348    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:27.471384    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:27.471454    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:27.489471    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:27.489531    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:27.499950    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:27.500020    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:27.510730    9411 logs.go:276] 0 containers: []
	W0717 11:06:27.510744    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:27.510796    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:27.521699    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:27.521718    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:27.521724    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:27.558956    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:27.558970    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:27.578678    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:27.578692    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:27.600396    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:27.600407    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:27.611958    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:27.611970    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:27.623411    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:27.623424    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:27.638435    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:27.638449    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:27.649404    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:27.649415    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:27.684859    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:27.684868    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:27.698498    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:27.698507    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:27.715837    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:27.715852    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:27.727829    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:27.727840    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:27.732842    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:27.732852    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:27.744143    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:27.744158    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:27.761234    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:27.761246    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:27.775515    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:27.775528    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:27.800170    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:27.800183    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:30.316640    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:35.317334    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:35.317500    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:35.335736    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:35.335821    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:35.352907    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:35.352977    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:35.363644    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:35.363709    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:35.374555    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:35.374624    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:35.385208    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:35.385271    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:35.395600    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:35.395672    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:35.406614    9411 logs.go:276] 0 containers: []
	W0717 11:06:35.406625    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:35.406681    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:35.416941    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:35.416964    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:35.416970    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:35.434766    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:35.434776    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:35.457268    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:35.457276    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:35.491203    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:35.491215    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:35.503042    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:35.503055    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:35.514704    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:35.514714    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:35.532154    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:35.532167    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:35.546987    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:35.546999    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:35.584202    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:35.584210    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:35.597857    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:35.597868    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:35.609705    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:35.609720    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:35.625788    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:35.625798    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:35.640825    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:35.640841    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:35.645909    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:35.645920    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:35.660922    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:35.660935    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:35.681590    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:35.681603    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:35.692929    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:35.692940    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:38.208346    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:43.210716    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:43.210863    9411 kubeadm.go:597] duration metric: took 4m4.13909275s to restartPrimaryControlPlane
	W0717 11:06:43.210984    9411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 11:06:43.211033    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 11:06:44.210116    9411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 11:06:44.215038    9411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:06:44.217823    9411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:06:44.220639    9411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:06:44.220645    9411 kubeadm.go:157] found existing configuration files:
	
	I0717 11:06:44.220663    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/admin.conf
	I0717 11:06:44.223617    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:06:44.223643    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:06:44.226545    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/kubelet.conf
	I0717 11:06:44.229364    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:06:44.229388    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:06:44.232614    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/controller-manager.conf
	I0717 11:06:44.235598    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:06:44.235624    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:06:44.238234    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/scheduler.conf
	I0717 11:06:44.240896    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:06:44.240920    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 11:06:44.243795    9411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 11:06:44.262790    9411 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0717 11:06:44.262831    9411 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 11:06:44.312912    9411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 11:06:44.312989    9411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 11:06:44.313042    9411 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 11:06:44.361349    9411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 11:06:44.369290    9411 out.go:204]   - Generating certificates and keys ...
	I0717 11:06:44.369328    9411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 11:06:44.369358    9411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 11:06:44.369431    9411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 11:06:44.369513    9411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 11:06:44.369550    9411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 11:06:44.369588    9411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 11:06:44.369632    9411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 11:06:44.369663    9411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 11:06:44.369703    9411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 11:06:44.370271    9411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 11:06:44.370290    9411 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 11:06:44.370315    9411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 11:06:44.408965    9411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 11:06:44.533568    9411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 11:06:44.603927    9411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 11:06:44.719135    9411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 11:06:44.748291    9411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 11:06:44.749442    9411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 11:06:44.749465    9411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 11:06:44.835270    9411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 11:06:44.839428    9411 out.go:204]   - Booting up control plane ...
	I0717 11:06:44.839473    9411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 11:06:44.839536    9411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 11:06:44.839579    9411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 11:06:44.839627    9411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 11:06:44.839723    9411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 11:06:49.340283    9411 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.506141 seconds
	I0717 11:06:49.340366    9411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 11:06:49.344922    9411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 11:06:49.853562    9411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 11:06:49.853782    9411 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-462000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 11:06:50.357907    9411 kubeadm.go:310] [bootstrap-token] Using token: 8b84m6.t6w3sse1ymni0yha
	I0717 11:06:50.361137    9411 out.go:204]   - Configuring RBAC rules ...
	I0717 11:06:50.361209    9411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 11:06:50.361257    9411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 11:06:50.363261    9411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 11:06:50.364604    9411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 11:06:50.365464    9411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 11:06:50.366668    9411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 11:06:50.369461    9411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 11:06:50.569857    9411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 11:06:50.763650    9411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 11:06:50.764388    9411 kubeadm.go:310] 
	I0717 11:06:50.764424    9411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 11:06:50.764426    9411 kubeadm.go:310] 
	I0717 11:06:50.764464    9411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 11:06:50.764472    9411 kubeadm.go:310] 
	I0717 11:06:50.764489    9411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 11:06:50.764519    9411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 11:06:50.764544    9411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 11:06:50.764547    9411 kubeadm.go:310] 
	I0717 11:06:50.764586    9411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 11:06:50.764589    9411 kubeadm.go:310] 
	I0717 11:06:50.764615    9411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 11:06:50.764618    9411 kubeadm.go:310] 
	I0717 11:06:50.764646    9411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 11:06:50.764689    9411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 11:06:50.764730    9411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 11:06:50.764734    9411 kubeadm.go:310] 
	I0717 11:06:50.764782    9411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 11:06:50.764823    9411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 11:06:50.764827    9411 kubeadm.go:310] 
	I0717 11:06:50.764869    9411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8b84m6.t6w3sse1ymni0yha \
	I0717 11:06:50.764920    9411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:41cf485c9ce3ed322472481a1cde965b121021497c454b4b9fd17940d4869b14 \
	I0717 11:06:50.764931    9411 kubeadm.go:310] 	--control-plane 
	I0717 11:06:50.764935    9411 kubeadm.go:310] 
	I0717 11:06:50.764981    9411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 11:06:50.764984    9411 kubeadm.go:310] 
	I0717 11:06:50.765040    9411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8b84m6.t6w3sse1ymni0yha \
	I0717 11:06:50.765103    9411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:41cf485c9ce3ed322472481a1cde965b121021497c454b4b9fd17940d4869b14 
	I0717 11:06:50.765157    9411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 11:06:50.765229    9411 cni.go:84] Creating CNI manager for ""
	I0717 11:06:50.765238    9411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:06:50.773801    9411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 11:06:50.777830    9411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 11:06:50.780936    9411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 11:06:50.786743    9411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 11:06:50.786823    9411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-462000 minikube.k8s.io/updated_at=2024_07_17T11_06_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=running-upgrade-462000 minikube.k8s.io/primary=true
	I0717 11:06:50.786824    9411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 11:06:50.794152    9411 ops.go:34] apiserver oom_adj: -16
	I0717 11:06:50.829750    9411 kubeadm.go:1113] duration metric: took 42.967ms to wait for elevateKubeSystemPrivileges
	I0717 11:06:50.833031    9411 kubeadm.go:394] duration metric: took 4m11.775539375s to StartCluster
	I0717 11:06:50.833047    9411 settings.go:142] acquiring lock: {Name:mk52ddc32cf249ba715452a288aa286713554b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:50.833205    9411 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:06:50.833640    9411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/kubeconfig: {Name:mk327624617af42cf4cf2f31d3ffb3402af5684d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:50.833855    9411 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:06:50.833960    9411 config.go:182] Loaded profile config "running-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:06:50.833907    9411 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 11:06:50.834002    9411 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-462000"
	I0717 11:06:50.834011    9411 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-462000"
	I0717 11:06:50.834013    9411 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-462000"
	W0717 11:06:50.834017    9411 addons.go:243] addon storage-provisioner should already be in state true
	I0717 11:06:50.834023    9411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-462000"
	I0717 11:06:50.834027    9411 host.go:66] Checking if "running-upgrade-462000" exists ...
	I0717 11:06:50.835063    9411 kapi.go:59] client config for running-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fc3730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:06:50.835188    9411 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-462000"
	W0717 11:06:50.835193    9411 addons.go:243] addon default-storageclass should already be in state true
	I0717 11:06:50.835200    9411 host.go:66] Checking if "running-upgrade-462000" exists ...
	I0717 11:06:50.837828    9411 out.go:177] * Verifying Kubernetes components...
	I0717 11:06:50.838132    9411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 11:06:50.841977    9411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 11:06:50.841985    9411 sshutil.go:53] new ssh client: &{IP:localhost Port:51246 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/running-upgrade-462000/id_rsa Username:docker}
	I0717 11:06:50.845664    9411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:50.849787    9411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:50.853807    9411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:06:50.853813    9411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 11:06:50.853819    9411 sshutil.go:53] new ssh client: &{IP:localhost Port:51246 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/running-upgrade-462000/id_rsa Username:docker}
	I0717 11:06:50.935525    9411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:06:50.940433    9411 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:06:50.940476    9411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:50.944524    9411 api_server.go:72] duration metric: took 110.657542ms to wait for apiserver process to appear ...
	I0717 11:06:50.944535    9411 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:06:50.944542    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:50.980660    9411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 11:06:50.996933    9411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:06:55.944891    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:55.944931    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:00.946577    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:00.946620    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:05.947008    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:05.947031    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:10.947363    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:10.947422    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:15.947911    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:15.947953    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:20.948565    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:20.948586    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:07:21.353362    9411 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0717 11:07:21.356684    9411 out.go:177] * Enabled addons: storage-provisioner
	I0717 11:07:21.364581    9411 addons.go:510] duration metric: took 30.530900291s for enable addons: enabled=[storage-provisioner]
	I0717 11:07:25.949759    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:25.949807    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:30.951021    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:30.951074    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:35.952975    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:35.953016    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:40.954952    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:40.955008    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:45.956524    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:45.956567    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:50.958814    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:50.958896    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:50.970129    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:07:50.970204    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:50.989580    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:07:50.989680    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:51.022404    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:07:51.022477    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:51.034319    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:07:51.034389    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:51.045806    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:07:51.045896    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:51.057644    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:07:51.057726    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:51.068840    9411 logs.go:276] 0 containers: []
	W0717 11:07:51.068853    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:51.068909    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:51.079660    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:07:51.079675    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:51.079681    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:51.121925    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:07:51.121947    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:07:51.138147    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:07:51.138163    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:07:51.156904    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:07:51.156916    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:07:51.173475    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:07:51.173490    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:07:51.193862    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:51.193873    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:51.219701    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:07:51.219711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:51.231864    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:51.231873    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:51.237005    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:51.237016    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:51.321950    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:07:51.321964    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:07:51.334725    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:07:51.334740    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:07:51.346360    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:07:51.346370    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:07:51.358808    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:07:51.358818    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:07:53.872002    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:58.872307    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:58.872396    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:58.884076    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:07:58.884113    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:58.895550    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:07:58.895584    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:58.908799    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:07:58.908860    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:58.920689    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:07:58.920757    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:58.932384    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:07:58.932457    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:58.946254    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:07:58.946320    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:58.957399    9411 logs.go:276] 0 containers: []
	W0717 11:07:58.957411    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:58.957471    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:58.968407    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:07:58.968423    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:58.968428    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:59.007691    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:59.007711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:59.012665    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:59.012675    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:59.080673    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:07:59.080686    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:07:59.093780    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:07:59.093790    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:07:59.106178    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:07:59.106189    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:07:59.121602    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:07:59.121615    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:07:59.134625    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:59.134636    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:59.159773    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:07:59.159783    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:59.171346    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:07:59.171358    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:07:59.187666    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:07:59.187674    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:07:59.202845    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:07:59.202856    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:07:59.221366    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:07:59.221375    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:01.740381    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:06.741441    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:06.741565    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:06.754142    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:06.754222    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:06.769257    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:06.769324    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:06.782463    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:06.782540    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:06.793978    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:06.794052    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:06.805112    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:06.805182    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:06.816678    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:06.816744    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:06.828176    9411 logs.go:276] 0 containers: []
	W0717 11:08:06.828188    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:06.828245    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:06.839697    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:06.839713    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:06.839718    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:06.858028    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:06.858039    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:06.879162    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:06.879175    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:06.892057    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:06.892069    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:06.907199    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:06.907210    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:06.919486    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:06.919498    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:06.935128    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:06.935140    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:06.948127    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:06.948138    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:06.961477    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:06.961495    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:06.987136    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:06.987152    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:07.027694    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:07.027711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:07.033206    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:07.033218    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:07.071959    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:07.071971    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:09.592539    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:14.595230    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:14.595404    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:14.608433    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:14.608507    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:14.621335    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:14.621399    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:14.632027    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:14.632098    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:14.642065    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:14.642133    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:14.653534    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:14.653602    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:14.666956    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:14.667021    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:14.686763    9411 logs.go:276] 0 containers: []
	W0717 11:08:14.686772    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:14.686798    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:14.699409    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:14.699425    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:14.699430    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:14.712078    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:14.712095    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:14.724810    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:14.724822    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:14.744957    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:14.744966    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:14.758330    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:14.758343    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:14.782958    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:14.782974    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:14.821917    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:14.821927    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:14.861040    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:14.861052    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:14.877801    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:14.877812    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:14.890787    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:14.890799    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:14.903874    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:14.903886    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:14.908957    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:14.908970    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:14.924750    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:14.924760    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:17.442965    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:22.445227    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:22.445439    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:22.464551    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:22.464644    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:22.479017    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:22.479097    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:22.491048    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:22.491125    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:22.501855    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:22.501922    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:22.512451    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:22.512519    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:22.523471    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:22.523542    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:22.534195    9411 logs.go:276] 0 containers: []
	W0717 11:08:22.534206    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:22.534266    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:22.545148    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:22.545164    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:22.545169    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:22.549767    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:22.549774    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:22.584019    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:22.584031    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:22.598811    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:22.598823    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:22.613531    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:22.613545    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:22.630065    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:22.630074    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:22.648601    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:22.648615    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:22.661004    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:22.661016    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:22.701984    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:22.702003    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:22.719677    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:22.719689    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:22.736281    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:22.736294    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:22.750163    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:22.750175    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:22.762839    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:22.762850    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:25.291009    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:30.293327    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:30.293526    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:30.314600    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:30.314699    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:30.334159    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:30.334222    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:30.346078    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:30.346169    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:30.356938    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:30.357010    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:30.367162    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:30.367238    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:30.377741    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:30.377809    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:30.388168    9411 logs.go:276] 0 containers: []
	W0717 11:08:30.388181    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:30.388240    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:30.398659    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:30.398675    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:30.398681    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:30.410686    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:30.410698    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:30.430286    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:30.430300    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:30.454352    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:30.454360    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:30.490580    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:30.490590    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:30.507211    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:30.507221    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:30.518730    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:30.518741    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:30.536058    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:30.536069    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:30.548216    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:30.548229    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:30.561360    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:30.561372    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:30.601661    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:30.601683    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:30.606810    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:30.606821    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:30.624926    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:30.624939    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:33.145478    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:38.146870    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:38.147121    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:38.175398    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:38.175513    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:38.192587    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:38.192672    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:38.205603    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:38.205669    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:38.215894    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:38.215973    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:38.227066    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:38.227136    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:38.238470    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:38.238537    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:38.248994    9411 logs.go:276] 0 containers: []
	W0717 11:08:38.249005    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:38.249058    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:38.261272    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:38.261290    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:38.261295    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:38.265785    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:38.265792    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:38.279420    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:38.279431    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:38.292337    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:38.292350    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:38.307320    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:38.307336    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:38.329941    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:38.329952    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:38.367448    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:38.367456    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:38.381710    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:38.381724    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:38.393215    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:38.393225    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:38.405042    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:38.405053    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:38.423467    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:38.423478    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:38.436164    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:38.436175    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:38.462574    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:38.462593    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:41.003842    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:46.006144    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:46.006339    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:46.022604    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:46.022676    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:46.034397    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:46.034462    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:46.045251    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:46.045343    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:46.055794    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:46.055857    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:46.065997    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:46.066059    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:46.076723    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:46.076784    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:46.087391    9411 logs.go:276] 0 containers: []
	W0717 11:08:46.087402    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:46.087457    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:46.098036    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:46.098051    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:46.098056    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:46.112942    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:46.112952    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:46.124531    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:46.124543    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:46.142394    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:46.142408    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:46.157874    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:46.157887    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:46.169204    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:46.169216    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:46.180658    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:46.180669    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:46.218279    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:46.218288    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:46.222701    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:46.222711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:46.233829    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:46.233840    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:46.252170    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:46.252182    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:46.276056    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:46.276066    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:46.311964    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:46.311976    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:48.827475    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:53.828202    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:53.828440    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:53.844597    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:53.844680    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:53.857498    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:53.857572    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:53.868953    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:53.869014    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:53.879228    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:53.879291    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:53.889811    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:53.889871    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:53.900389    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:53.900458    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:53.911698    9411 logs.go:276] 0 containers: []
	W0717 11:08:53.911710    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:53.911766    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:53.922003    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:53.922017    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:53.922022    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:53.926407    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:53.926416    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:53.938304    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:53.938317    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:53.963506    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:53.963516    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:53.982136    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:53.982147    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:54.018671    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:54.018681    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:54.053718    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:54.053732    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:54.068368    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:54.068377    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:54.082063    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:54.082075    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:54.100967    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:54.100977    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:54.112986    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:54.112997    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:54.131611    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:54.131622    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:54.143511    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:54.143522    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:56.657409    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:01.659679    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:01.659893    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:01.680555    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:01.680649    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:01.695487    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:01.695560    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:01.708082    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:09:01.708147    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:01.718446    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:01.718519    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:01.729093    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:01.729162    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:01.739508    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:01.739576    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:01.750050    9411 logs.go:276] 0 containers: []
	W0717 11:09:01.750060    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:01.750116    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:01.760429    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:01.760448    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:01.760454    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:01.774772    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:01.774782    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:01.790409    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:01.790422    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:01.803390    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:01.803400    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:01.817899    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:01.817909    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:01.829807    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:01.829817    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:01.844162    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:01.844174    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:01.849061    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:01.849068    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:01.884294    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:01.884305    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:01.902552    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:01.902562    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:01.914924    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:01.914937    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:01.939900    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:01.939911    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:01.951980    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:01.951992    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:04.490560    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:09.492801    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:09.493090    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:09.513887    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:09.513990    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:09.528442    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:09.528526    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:09.544255    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:09.544324    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:09.554480    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:09.554538    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:09.564681    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:09.564752    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:09.580372    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:09.580448    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:09.590576    9411 logs.go:276] 0 containers: []
	W0717 11:09:09.590586    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:09.590635    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:09.601002    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:09.601019    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:09.601026    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:09.639838    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:09.639848    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:09.651458    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:09.651471    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:09.662612    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:09.662624    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:09.677303    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:09.677314    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:09.689624    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:09.689638    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:09.701028    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:09.701039    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:09.705884    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:09.705893    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:09.723158    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:09.723172    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:09.735001    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:09.735011    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:09.751157    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:09.751167    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:09.775271    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:09.775280    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:09.787264    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:09.787273    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:09.801090    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:09.801102    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:09.822802    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:09.822815    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:12.361696    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:17.364224    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:17.364391    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:17.376874    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:17.376945    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:17.387553    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:17.387626    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:17.398438    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:17.398511    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:17.409558    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:17.409633    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:17.427261    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:17.427330    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:17.438140    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:17.438203    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:17.448826    9411 logs.go:276] 0 containers: []
	W0717 11:09:17.448839    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:17.448899    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:17.459563    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:17.459582    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:17.459586    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:17.474527    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:17.474537    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:17.489296    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:17.489312    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:17.524568    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:17.524579    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:17.562294    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:17.562303    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:17.573701    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:17.573711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:17.598439    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:17.598455    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:17.602935    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:17.602941    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:17.621566    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:17.621576    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:17.633816    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:17.633828    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:17.646091    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:17.646102    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:17.658233    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:17.658242    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:17.669541    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:17.669551    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:17.705663    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:17.705675    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:17.717330    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:17.717340    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:20.231245    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:25.233550    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:25.233767    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:25.255511    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:25.255618    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:25.270480    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:25.270558    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:25.283717    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:25.283781    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:25.295196    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:25.295271    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:25.305387    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:25.305455    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:25.316046    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:25.316110    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:25.326503    9411 logs.go:276] 0 containers: []
	W0717 11:09:25.326520    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:25.326577    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:25.337824    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:25.337842    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:25.337847    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:25.355995    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:25.356007    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:25.360836    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:25.360845    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:25.375034    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:25.375046    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:25.386606    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:25.386618    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:25.398581    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:25.398593    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:25.423841    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:25.423849    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:25.463033    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:25.463041    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:25.477417    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:25.477427    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:25.488674    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:25.488684    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:25.503591    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:25.503599    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:25.515888    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:25.515899    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:25.556951    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:25.556964    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:25.568646    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:25.568660    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:25.580205    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:25.580223    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:28.098196    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:33.100559    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:33.100733    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:33.116596    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:33.116677    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:33.128233    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:33.128294    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:33.139340    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:33.139413    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:33.150135    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:33.150204    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:33.160896    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:33.160969    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:33.171505    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:33.171581    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:33.185956    9411 logs.go:276] 0 containers: []
	W0717 11:09:33.185967    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:33.186039    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:33.197035    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:33.197055    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:33.197061    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:33.215981    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:33.215991    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:33.233019    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:33.233029    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:33.258513    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:33.258521    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:33.296937    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:33.296948    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:33.312148    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:33.312158    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:33.323471    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:33.323484    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:33.335131    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:33.335141    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:33.346124    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:33.346135    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:33.350679    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:33.350687    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:33.365680    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:33.365690    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:33.381054    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:33.381065    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:33.396989    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:33.397001    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:33.409355    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:33.409365    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:33.447025    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:33.447036    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:35.959532    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:40.961846    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:40.961977    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:40.975329    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:40.975407    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:40.987020    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:40.987088    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:40.999075    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:40.999147    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:41.009563    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:41.009632    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:41.020277    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:41.020339    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:41.039480    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:41.039551    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:41.049753    9411 logs.go:276] 0 containers: []
	W0717 11:09:41.049769    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:41.049824    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:41.062798    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:41.062816    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:41.062821    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:41.067505    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:41.067512    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:41.086803    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:41.086815    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:41.123549    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:41.123557    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:41.134764    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:41.134777    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:41.152177    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:41.152189    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:41.176276    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:41.176284    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:41.215553    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:41.215568    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:41.230174    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:41.230184    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:41.241510    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:41.241521    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:41.253602    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:41.253615    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:41.265156    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:41.265166    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:41.276751    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:41.276762    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:41.288268    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:41.288277    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:41.303073    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:41.303084    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:43.816879    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:48.819162    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:48.819324    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:48.832139    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:48.832219    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:48.850958    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:48.851022    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:48.861678    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:48.861747    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:48.872110    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:48.872169    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:48.882367    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:48.882439    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:48.892416    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:48.892474    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:48.902409    9411 logs.go:276] 0 containers: []
	W0717 11:09:48.902425    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:48.902479    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:48.912869    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:48.912887    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:48.912893    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:48.917459    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:48.917468    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:48.956914    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:48.956927    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:48.971327    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:48.971339    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:48.983078    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:48.983089    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:48.998585    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:48.998595    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:49.013544    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:49.013553    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:49.053937    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:49.053951    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:49.065975    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:49.065986    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:49.085690    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:49.085701    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:49.102395    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:49.102406    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:49.114266    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:49.114279    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:49.128717    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:49.128728    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:49.143949    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:49.143959    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:49.155970    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:49.155980    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:51.683558    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:56.685689    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:56.685841    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:56.704907    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:56.705010    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:56.721589    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:56.721657    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:56.733251    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:56.733327    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:56.743369    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:56.743441    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:56.753887    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:56.753951    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:56.763965    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:56.764033    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:56.774133    9411 logs.go:276] 0 containers: []
	W0717 11:09:56.774145    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:56.774204    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:56.784576    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:56.784594    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:56.784600    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:56.797143    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:56.797154    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:56.812992    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:56.813004    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:56.848526    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:56.848539    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:56.853196    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:56.853204    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:56.867588    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:56.867601    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:56.878915    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:56.878929    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:56.891559    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:56.891569    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:56.903399    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:56.903414    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:56.941999    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:56.942008    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:56.954381    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:56.954391    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:56.968773    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:56.968786    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:56.985712    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:56.985724    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:56.997788    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:56.997800    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:57.022278    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:57.022291    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:59.538219    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:04.540474    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:04.540640    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:04.555634    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:04.555718    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:04.567781    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:04.567844    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:04.579042    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:04.579103    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:04.589543    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:04.589613    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:04.599982    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:04.600046    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:04.610917    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:04.610981    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:04.621935    9411 logs.go:276] 0 containers: []
	W0717 11:10:04.621946    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:04.622005    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:04.632533    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:04.632555    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:04.632560    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:04.644234    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:04.644245    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:04.656576    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:04.656587    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:04.676375    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:04.676386    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:04.692389    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:04.692398    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:04.718586    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:04.718599    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:04.733817    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:04.733828    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:04.748816    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:04.748827    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:04.788845    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:04.788857    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:04.800993    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:04.801005    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:04.818627    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:04.818636    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:04.823081    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:04.823086    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:04.834391    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:04.834401    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:04.846390    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:04.846406    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:04.858282    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:04.858292    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:07.396749    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:12.399446    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:12.399965    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:12.435735    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:12.435873    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:12.456118    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:12.456207    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:12.471212    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:12.471294    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:12.483570    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:12.483638    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:12.494065    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:12.494135    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:12.504806    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:12.504874    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:12.517748    9411 logs.go:276] 0 containers: []
	W0717 11:10:12.517760    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:12.517812    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:12.528451    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:12.528471    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:12.528477    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:12.544628    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:12.544639    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:12.556484    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:12.556494    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:12.570751    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:12.570760    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:12.585015    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:12.585025    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:12.606271    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:12.606286    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:12.620056    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:12.620065    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:12.655866    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:12.655878    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:12.673410    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:12.673420    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:12.685568    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:12.685579    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:12.710249    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:12.710257    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:12.721742    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:12.721753    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:12.726239    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:12.726246    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:12.747116    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:12.747128    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:12.765860    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:12.765870    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:15.305221    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:20.307477    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:20.307636    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:20.320889    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:20.320967    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:20.331949    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:20.332014    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:20.342225    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:20.342294    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:20.352747    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:20.352816    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:20.364425    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:20.364486    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:20.374563    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:20.374626    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:20.384900    9411 logs.go:276] 0 containers: []
	W0717 11:10:20.384913    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:20.384965    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:20.395325    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:20.395343    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:20.395350    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:20.431439    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:20.431453    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:20.446648    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:20.446659    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:20.470697    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:20.470703    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:20.475003    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:20.475011    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:20.489080    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:20.489091    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:20.507300    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:20.507314    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:20.524630    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:20.524640    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:20.536004    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:20.536017    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:20.550850    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:20.550863    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:20.562483    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:20.562492    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:20.575138    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:20.575151    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:20.587082    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:20.587092    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:20.624204    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:20.624214    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:20.636202    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:20.636213    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:23.150533    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:28.152801    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:28.152899    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:28.164505    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:28.164580    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:28.176520    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:28.176572    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:28.190995    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:28.191065    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:28.202384    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:28.202446    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:28.213199    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:28.213272    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:28.223702    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:28.223769    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:28.234438    9411 logs.go:276] 0 containers: []
	W0717 11:10:28.234450    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:28.234508    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:28.246288    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:28.246304    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:28.246308    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:28.284780    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:28.284799    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:28.321202    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:28.321217    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:28.333822    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:28.333833    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:28.338654    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:28.338667    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:28.355618    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:28.355628    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:28.368226    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:28.368236    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:28.387206    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:28.387218    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:28.412869    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:28.412887    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:28.429479    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:28.429495    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:28.445436    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:28.445447    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:28.458165    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:28.458180    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:28.477436    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:28.477449    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:28.490043    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:28.490055    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:28.501697    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:28.501709    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:31.017681    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:36.019947    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:36.020111    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:36.037274    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:36.037347    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:36.052206    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:36.052303    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:36.074722    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:36.074787    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:36.085921    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:36.085987    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:36.096842    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:36.096915    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:36.107399    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:36.107466    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:36.117134    9411 logs.go:276] 0 containers: []
	W0717 11:10:36.117147    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:36.117199    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:36.127499    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:36.127521    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:36.127527    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:36.163873    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:36.163884    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:36.178428    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:36.178440    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:36.190741    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:36.190753    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:36.202899    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:36.202908    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:36.207385    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:36.207394    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:36.221506    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:36.221516    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:36.247093    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:36.247103    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:36.285558    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:36.285570    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:36.297747    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:36.297758    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:36.309851    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:36.309862    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:36.322209    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:36.322220    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:36.340962    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:36.340973    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:36.352783    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:36.352795    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:36.364738    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:36.364751    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:38.885511    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:43.887209    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:43.887306    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:43.898291    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:43.898376    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:43.910061    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:43.910119    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:43.921080    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:43.921139    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:43.939048    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:43.939105    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:43.949237    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:43.949301    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:43.964969    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:43.965039    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:43.975755    9411 logs.go:276] 0 containers: []
	W0717 11:10:43.975768    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:43.975833    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:43.988331    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:43.988350    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:43.988356    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:44.000735    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:44.000745    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:44.014609    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:44.014623    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:44.025844    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:44.025855    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:44.037747    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:44.037759    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:44.074995    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:44.075010    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:44.087024    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:44.087036    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:44.102718    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:44.102732    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:44.114811    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:44.114826    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:44.152088    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:44.152102    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:44.166260    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:44.166274    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:44.179526    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:44.179539    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:44.198043    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:44.198057    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:44.220817    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:44.220825    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:44.225734    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:44.225742    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:46.743740    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:51.745998    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:51.750576    9411 out.go:177] 
	W0717 11:10:51.754534    9411 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0717 11:10:51.754546    9411 out.go:239] * 
	W0717 11:10:51.755392    9411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:10:51.765452    9411 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-462000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-17 11:10:51.86535 -0700 PDT m=+1308.308311959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-462000 -n running-upgrade-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-462000 -n running-upgrade-462000: exit status 2 (15.60062225s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-462000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-060000          | force-systemd-flag-060000 | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-812000              | force-systemd-env-812000  | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-812000           | force-systemd-env-812000  | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT | 17 Jul 24 11:01 PDT |
	| start   | -p docker-flags-212000                | docker-flags-212000       | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-060000             | force-systemd-flag-060000 | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-060000          | force-systemd-flag-060000 | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT | 17 Jul 24 11:01 PDT |
	| start   | -p cert-expiration-095000             | cert-expiration-095000    | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-212000 ssh               | docker-flags-212000       | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-212000 ssh               | docker-flags-212000       | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-212000                | docker-flags-212000       | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT | 17 Jul 24 11:01 PDT |
	| start   | -p cert-options-634000                | cert-options-634000       | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-634000 ssh               | cert-options-634000       | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-634000 -- sudo        | cert-options-634000       | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-634000                | cert-options-634000       | jenkins | v1.33.1 | 17 Jul 24 11:01 PDT | 17 Jul 24 11:01 PDT |
	| start   | -p running-upgrade-462000             | minikube                  | jenkins | v1.26.0 | 17 Jul 24 11:01 PDT | 17 Jul 24 11:02 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-462000             | running-upgrade-462000    | jenkins | v1.33.1 | 17 Jul 24 11:02 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-095000             | cert-expiration-095000    | jenkins | v1.33.1 | 17 Jul 24 11:04 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-095000             | cert-expiration-095000    | jenkins | v1.33.1 | 17 Jul 24 11:04 PDT | 17 Jul 24 11:04 PDT |
	| start   | -p kubernetes-upgrade-212000          | kubernetes-upgrade-212000 | jenkins | v1.33.1 | 17 Jul 24 11:04 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-212000          | kubernetes-upgrade-212000 | jenkins | v1.33.1 | 17 Jul 24 11:04 PDT | 17 Jul 24 11:04 PDT |
	| start   | -p kubernetes-upgrade-212000          | kubernetes-upgrade-212000 | jenkins | v1.33.1 | 17 Jul 24 11:04 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-212000          | kubernetes-upgrade-212000 | jenkins | v1.33.1 | 17 Jul 24 11:04 PDT | 17 Jul 24 11:04 PDT |
	| start   | -p stopped-upgrade-018000             | minikube                  | jenkins | v1.26.0 | 17 Jul 24 11:04 PDT | 17 Jul 24 11:05 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-018000 stop           | minikube                  | jenkins | v1.26.0 | 17 Jul 24 11:05 PDT | 17 Jul 24 11:05 PDT |
	| start   | -p stopped-upgrade-018000             | stopped-upgrade-018000    | jenkins | v1.33.1 | 17 Jul 24 11:05 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 11:05:48
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 11:05:48.491603    9661 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:05:48.491756    9661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:48.491759    9661 out.go:304] Setting ErrFile to fd 2...
	I0717 11:05:48.491761    9661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:48.491897    9661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:05:48.492985    9661 out.go:298] Setting JSON to false
	I0717 11:05:48.510459    9661 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5716,"bootTime":1721233832,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:05:48.510523    9661 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:05:48.515255    9661 out.go:177] * [stopped-upgrade-018000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:05:48.522250    9661 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:05:48.522308    9661 notify.go:220] Checking for updates...
	I0717 11:05:48.529191    9661 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:05:48.532184    9661 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:05:48.535126    9661 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:05:48.538171    9661 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:05:48.541179    9661 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:05:48.542667    9661 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:05:48.546121    9661 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 11:05:48.549162    9661 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:05:48.553035    9661 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:05:48.560162    9661 start.go:297] selected driver: qemu2
	I0717 11:05:48.560167    9661 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51499 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:05:48.560210    9661 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:05:48.562919    9661 cni.go:84] Creating CNI manager for ""
	I0717 11:05:48.562934    9661 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:05:48.562962    9661 start.go:340] cluster config:
	{Name:stopped-upgrade-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51499 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:05:48.563011    9661 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:05:48.571143    9661 out.go:177] * Starting "stopped-upgrade-018000" primary control-plane node in "stopped-upgrade-018000" cluster
	I0717 11:05:48.575189    9661 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:05:48.575205    9661 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0717 11:05:48.575226    9661 cache.go:56] Caching tarball of preloaded images
	I0717 11:05:48.575294    9661 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:05:48.575299    9661 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0717 11:05:48.575355    9661 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/config.json ...
	I0717 11:05:48.575769    9661 start.go:360] acquireMachinesLock for stopped-upgrade-018000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:48.575802    9661 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "stopped-upgrade-018000"
	I0717 11:05:48.575809    9661 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:05:48.575814    9661 fix.go:54] fixHost starting: 
	I0717 11:05:48.575915    9661 fix.go:112] recreateIfNeeded on stopped-upgrade-018000: state=Stopped err=<nil>
	W0717 11:05:48.575922    9661 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:05:48.583196    9661 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-018000" ...
	I0717 11:05:47.817783    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:47.817895    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:47.828933    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:47.829001    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:47.839050    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:47.839119    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:47.858304    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:47.858373    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:47.868976    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:47.869041    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:47.880224    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:47.880296    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:47.891163    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:47.891233    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:47.902355    9411 logs.go:276] 0 containers: []
	W0717 11:05:47.902372    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:47.902454    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:47.913004    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:47.913024    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:47.913030    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:47.923953    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:47.923966    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:47.938034    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:47.938047    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:47.954104    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:47.954115    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:47.965859    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:47.965872    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:47.984667    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:47.984678    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:48.008995    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:48.009001    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:48.020511    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:48.020525    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:48.034395    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:48.034406    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:48.051322    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:48.051330    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:48.062617    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:48.062628    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:48.073778    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:48.073788    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:48.114496    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:48.114507    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:48.131457    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:48.131469    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:48.166481    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:48.166490    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:48.173276    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:48.173285    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:48.195359    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:48.195371    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:48.587153    9661 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:48.587219    9661 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51465-:22,hostfwd=tcp::51466-:2376,hostname=stopped-upgrade-018000 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/disk.qcow2
	I0717 11:05:48.635185    9661 main.go:141] libmachine: STDOUT: 
	I0717 11:05:48.635217    9661 main.go:141] libmachine: STDERR: 
	I0717 11:05:48.635223    9661 main.go:141] libmachine: Waiting for VM to start (ssh -p 51465 docker@127.0.0.1)...
	I0717 11:05:50.712144    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:05:55.714895    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:05:55.715065    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:05:55.734277    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:05:55.734355    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:05:55.748063    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:05:55.748139    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:05:55.759168    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:05:55.759231    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:05:55.770784    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:05:55.770845    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:05:55.782940    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:05:55.783005    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:05:55.794281    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:05:55.794340    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:05:55.805046    9411 logs.go:276] 0 containers: []
	W0717 11:05:55.805061    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:05:55.805115    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:05:55.815555    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:05:55.815573    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:05:55.815579    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:05:55.827943    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:05:55.827955    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:05:55.863954    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:05:55.863964    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:05:55.878176    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:05:55.878187    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:05:55.892381    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:05:55.892392    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:05:55.916989    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:05:55.916998    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:05:55.935528    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:05:55.935539    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:05:55.947713    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:05:55.947725    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:05:55.960067    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:05:55.960078    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:05:55.971374    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:05:55.971387    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:05:55.984244    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:05:55.984258    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:05:56.005194    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:05:56.005205    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:05:56.024611    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:05:56.024622    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:05:56.062180    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:05:56.062188    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:05:56.066461    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:05:56.066470    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:05:56.080064    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:05:56.080078    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:05:56.099368    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:05:56.099381    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:05:58.613395    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:03.615603    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:03.616002    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:03.657380    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:03.657517    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:03.679840    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:03.679952    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:03.695319    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:03.695398    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:03.707995    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:03.708064    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:03.719250    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:03.719325    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:03.729990    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:03.730058    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:03.741655    9411 logs.go:276] 0 containers: []
	W0717 11:06:03.741666    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:03.741723    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:03.752188    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:03.752206    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:03.752212    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:03.766850    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:03.766863    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:03.780816    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:03.780831    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:03.792400    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:03.792414    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:03.817388    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:03.817401    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:03.836373    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:03.836386    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:03.850185    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:03.850203    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:03.868054    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:03.868066    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:03.879784    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:03.879796    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:03.891473    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:03.891485    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:03.903404    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:03.903418    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:03.918921    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:03.918932    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:03.923645    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:03.923652    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:03.960706    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:03.960720    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:03.975406    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:03.975419    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:03.988816    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:03.988828    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:04.011977    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:04.011987    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:06.550539    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:09.192536    9661 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/config.json ...
	I0717 11:06:09.193366    9661 machine.go:94] provisionDockerMachine start ...
	I0717 11:06:09.193571    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.194165    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.194181    9661 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 11:06:09.289451    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 11:06:09.289486    9661 buildroot.go:166] provisioning hostname "stopped-upgrade-018000"
	I0717 11:06:09.289626    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.289883    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.289904    9661 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-018000 && echo "stopped-upgrade-018000" | sudo tee /etc/hostname
	I0717 11:06:09.370915    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-018000
	
	I0717 11:06:09.370996    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.371160    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.371173    9661 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-018000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-018000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-018000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 11:06:09.445833    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 11:06:09.445847    9661 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-6848/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-6848/.minikube}
	I0717 11:06:09.445856    9661 buildroot.go:174] setting up certificates
	I0717 11:06:09.445861    9661 provision.go:84] configureAuth start
	I0717 11:06:09.445865    9661 provision.go:143] copyHostCerts
	I0717 11:06:09.445958    9661 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem, removing ...
	I0717 11:06:09.445967    9661 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem
	I0717 11:06:09.446100    9661 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem (1082 bytes)
	I0717 11:06:09.446322    9661 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem, removing ...
	I0717 11:06:09.446327    9661 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem
	I0717 11:06:09.446961    9661 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem (1123 bytes)
	I0717 11:06:09.447097    9661 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem, removing ...
	I0717 11:06:09.447101    9661 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem
	I0717 11:06:09.447161    9661 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem (1679 bytes)
	I0717 11:06:09.447291    9661 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-018000 san=[127.0.0.1 localhost minikube stopped-upgrade-018000]
	I0717 11:06:09.525636    9661 provision.go:177] copyRemoteCerts
	I0717 11:06:09.525665    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 11:06:09.525671    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:06:09.562712    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 11:06:09.570203    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 11:06:09.577327    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 11:06:09.583961    9661 provision.go:87] duration metric: took 138.097208ms to configureAuth
	I0717 11:06:09.583978    9661 buildroot.go:189] setting minikube options for container-runtime
	I0717 11:06:09.584078    9661 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:06:09.584111    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.584207    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.584211    9661 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 11:06:09.651900    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 11:06:09.651907    9661 buildroot.go:70] root file system type: tmpfs
	I0717 11:06:09.651958    9661 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 11:06:09.652004    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.652118    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.652153    9661 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 11:06:09.724215    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 11:06:09.724280    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.724399    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.724412    9661 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 11:06:10.095888    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 11:06:10.095902    9661 machine.go:97] duration metric: took 902.53125ms to provisionDockerMachine
	I0717 11:06:10.095910    9661 start.go:293] postStartSetup for "stopped-upgrade-018000" (driver="qemu2")
	I0717 11:06:10.095917    9661 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 11:06:10.095969    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 11:06:10.095979    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:06:10.133902    9661 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 11:06:10.135121    9661 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 11:06:10.135132    9661 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-6848/.minikube/addons for local assets ...
	I0717 11:06:10.135222    9661 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-6848/.minikube/files for local assets ...
	I0717 11:06:10.135351    9661 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem -> 73362.pem in /etc/ssl/certs
	I0717 11:06:10.135479    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 11:06:10.138480    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem --> /etc/ssl/certs/73362.pem (1708 bytes)
	I0717 11:06:10.145845    9661 start.go:296] duration metric: took 49.931334ms for postStartSetup
	I0717 11:06:10.145859    9661 fix.go:56] duration metric: took 21.570194917s for fixHost
	I0717 11:06:10.145896    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:10.146000    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:10.146009    9661 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 11:06:10.211409    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239569.805862171
	
	I0717 11:06:10.211415    9661 fix.go:216] guest clock: 1721239569.805862171
	I0717 11:06:10.211419    9661 fix.go:229] Guest: 2024-07-17 11:06:09.805862171 -0700 PDT Remote: 2024-07-17 11:06:10.145861 -0700 PDT m=+21.677974917 (delta=-339.998829ms)
	I0717 11:06:10.211433    9661 fix.go:200] guest clock delta is within tolerance: -339.998829ms
	I0717 11:06:10.211436    9661 start.go:83] releasing machines lock for "stopped-upgrade-018000", held for 21.635780792s
	I0717 11:06:10.211498    9661 ssh_runner.go:195] Run: cat /version.json
	I0717 11:06:10.211507    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:06:10.211498    9661 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 11:06:10.211552    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	W0717 11:06:10.212089    9661 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51465: connect: connection refused
	I0717 11:06:10.212114    9661 retry.go:31] will retry after 295.619898ms: dial tcp [::1]:51465: connect: connection refused
	W0717 11:06:10.246755    9661 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 11:06:10.246806    9661 ssh_runner.go:195] Run: systemctl --version
	I0717 11:06:10.248774    9661 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 11:06:10.250557    9661 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 11:06:10.250583    9661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 11:06:10.253902    9661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 11:06:10.258640    9661 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 11:06:10.258647    9661 start.go:495] detecting cgroup driver to use...
	I0717 11:06:10.258726    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:06:10.265983    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0717 11:06:10.269484    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 11:06:10.272577    9661 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 11:06:10.272601    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 11:06:10.275627    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:06:10.278562    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 11:06:10.282017    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:06:10.285420    9661 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 11:06:10.288334    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 11:06:10.291153    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 11:06:10.294486    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 11:06:10.298104    9661 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 11:06:10.301262    9661 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 11:06:10.304171    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:10.382112    9661 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 11:06:10.392816    9661 start.go:495] detecting cgroup driver to use...
	I0717 11:06:10.392893    9661 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 11:06:10.397935    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:06:10.402245    9661 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 11:06:10.413491    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:06:10.418179    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:06:10.423250    9661 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 11:06:10.479405    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:06:10.484632    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:06:10.490329    9661 ssh_runner.go:195] Run: which cri-dockerd
	I0717 11:06:10.491745    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 11:06:10.494462    9661 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 11:06:10.499062    9661 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 11:06:10.581590    9661 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 11:06:10.655582    9661 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 11:06:10.655646    9661 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 11:06:10.662623    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:10.742597    9661 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:06:11.865368    9661 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.122765833s)
	I0717 11:06:11.865436    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 11:06:11.870769    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:06:11.875809    9661 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 11:06:11.947069    9661 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 11:06:12.028520    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:12.107440    9661 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 11:06:12.113746    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:06:12.118565    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:12.196663    9661 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 11:06:12.235393    9661 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 11:06:12.235486    9661 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 11:06:12.238277    9661 start.go:563] Will wait 60s for crictl version
	I0717 11:06:12.238327    9661 ssh_runner.go:195] Run: which crictl
	I0717 11:06:12.239529    9661 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 11:06:12.254618    9661 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0717 11:06:12.254703    9661 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:06:12.270797    9661 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:06:12.290661    9661 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0717 11:06:12.290728    9661 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0717 11:06:12.292515    9661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 11:06:12.296311    9661 kubeadm.go:883] updating cluster {Name:stopped-upgrade-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51499 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0717 11:06:12.296383    9661 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:06:12.296421    9661 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:06:12.311597    9661 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:06:12.311606    9661 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:06:12.311654    9661 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:06:12.314817    9661 ssh_runner.go:195] Run: which lz4
	I0717 11:06:12.316294    9661 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 11:06:12.317541    9661 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 11:06:12.317562    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0717 11:06:13.253695    9661 docker.go:649] duration metric: took 937.439083ms to copy over tarball
	I0717 11:06:13.253751    9661 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 11:06:11.553374    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:11.554006    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:11.591354    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:11.591494    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:11.613184    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:11.613295    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:11.627846    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:11.627917    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:11.640187    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:11.640251    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:11.650907    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:11.650979    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:11.661889    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:11.661950    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:11.676500    9411 logs.go:276] 0 containers: []
	W0717 11:06:11.676513    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:11.676561    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:11.686955    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:11.686973    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:11.686978    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:11.701091    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:11.701102    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:11.719419    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:11.719434    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:11.734115    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:11.734133    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:11.756968    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:11.756995    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:11.795571    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:11.795582    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:11.818389    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:11.818403    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:11.832866    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:11.832878    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:11.848612    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:11.848626    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:11.860823    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:11.860835    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:11.865233    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:11.865249    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:11.877646    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:11.877658    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:11.890305    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:11.890321    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:11.901907    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:11.901922    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:11.939421    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:11.939435    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:11.954223    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:11.954236    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:11.971917    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:11.971929    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:14.413200    9661 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159444958s)
	I0717 11:06:14.413217    9661 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 11:06:14.429096    9661 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:06:14.432089    9661 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0717 11:06:14.437032    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:14.516590    9661 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:06:16.173493    9661 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.656896792s)
	I0717 11:06:16.173601    9661 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:06:16.185696    9661 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:06:16.185704    9661 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:06:16.185709    9661 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 11:06:16.191382    9661 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.193369    9661 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.195187    9661 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.195264    9661 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.197350    9661 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.197404    9661 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.198746    9661 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.198763    9661 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.200244    9661 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.200250    9661 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.201483    9661 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.201488    9661 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 11:06:16.203015    9661 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.203005    9661 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.203888    9661 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 11:06:16.204943    9661 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.607395    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.617831    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.620123    9661 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0717 11:06:16.620148    9661 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.620186    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.630814    9661 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0717 11:06:16.630851    9661 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.630907    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.631729    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0717 11:06:16.641196    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0717 11:06:16.647764    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.648506    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.657199    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.661784    9661 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0717 11:06:16.661797    9661 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0717 11:06:16.661803    9661 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.661807    9661 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.661847    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.661847    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.677797    9661 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0717 11:06:16.677824    9661 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.677877    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.680082    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0717 11:06:16.682363    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0717 11:06:16.682382    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0717 11:06:16.689411    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0717 11:06:16.693795    9661 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0717 11:06:16.693811    9661 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0717 11:06:16.693865    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0717 11:06:16.703993    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0717 11:06:16.704116    9661 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0717 11:06:16.705671    9661 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0717 11:06:16.705684    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0717 11:06:16.714126    9661 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0717 11:06:16.714137    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0717 11:06:16.726365    9661 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0717 11:06:16.726487    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.749699    9661 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0717 11:06:16.749751    9661 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0717 11:06:16.749768    9661 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.749820    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.760323    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 11:06:16.760431    9661 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:06:16.761779    9661 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0717 11:06:16.761791    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0717 11:06:16.805476    9661 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:06:16.805492    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0717 11:06:16.846457    9661 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0717 11:06:16.852881    9661 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 11:06:16.853027    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.864711    9661 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 11:06:16.864732    9661 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.864784    9661 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.878183    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 11:06:16.878302    9661 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:06:16.879722    9661 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 11:06:16.879734    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0717 11:06:16.907145    9661 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:06:16.907167    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0717 11:06:17.146828    9661 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 11:06:17.146865    9661 cache_images.go:92] duration metric: took 961.156708ms to LoadCachedImages
	W0717 11:06:17.146910    9661 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0717 11:06:17.146916    9661 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0717 11:06:17.146981    9661 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-018000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 11:06:17.147045    9661 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 11:06:17.163689    9661 cni.go:84] Creating CNI manager for ""
	I0717 11:06:17.163703    9661 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:06:17.163707    9661 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 11:06:17.163716    9661 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-018000 NodeName:stopped-upgrade-018000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 11:06:17.163781    9661 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-018000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 11:06:17.163832    9661 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0717 11:06:17.166762    9661 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 11:06:17.166789    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 11:06:17.170014    9661 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 11:06:17.175278    9661 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 11:06:17.180097    9661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0717 11:06:17.185357    9661 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0717 11:06:17.186700    9661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 11:06:17.190596    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:17.269431    9661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:06:17.276470    9661 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000 for IP: 10.0.2.15
	I0717 11:06:17.276481    9661 certs.go:194] generating shared ca certs ...
	I0717 11:06:17.276490    9661 certs.go:226] acquiring lock for ca certs: {Name:mk50b621e3b03c5626e0e338e372bd26b7b413d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:17.276659    9661 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.key
	I0717 11:06:17.276715    9661 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.key
	I0717 11:06:17.276720    9661 certs.go:256] generating profile certs ...
	I0717 11:06:17.276814    9661 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.key
	I0717 11:06:17.276834    9661 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key.0811418c
	I0717 11:06:17.276845    9661 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt.0811418c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0717 11:06:17.422657    9661 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt.0811418c ...
	I0717 11:06:17.422670    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt.0811418c: {Name:mkab3957881c9d5f0f16ee6aed288ae575f57d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:17.423228    9661 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key.0811418c ...
	I0717 11:06:17.423242    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key.0811418c: {Name:mk5eacf4c7de8eaeedb0e3634d3614958a122f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:17.423400    9661 certs.go:381] copying /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt.0811418c -> /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt
	I0717 11:06:17.423567    9661 certs.go:385] copying /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key.0811418c -> /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key
	I0717 11:06:17.423740    9661 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/proxy-client.key
	I0717 11:06:17.423875    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336.pem (1338 bytes)
	W0717 11:06:17.423908    9661 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336_empty.pem, impossibly tiny 0 bytes
	I0717 11:06:17.423914    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 11:06:17.423935    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem (1082 bytes)
	I0717 11:06:17.423955    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem (1123 bytes)
	I0717 11:06:17.423971    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem (1679 bytes)
	I0717 11:06:17.424009    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem (1708 bytes)
	I0717 11:06:17.424322    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 11:06:17.431393    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 11:06:17.438592    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 11:06:17.445468    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 11:06:17.452556    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 11:06:17.459140    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 11:06:17.466310    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 11:06:17.473581    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 11:06:17.481333    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 11:06:17.488319    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336.pem --> /usr/share/ca-certificates/7336.pem (1338 bytes)
	I0717 11:06:17.495007    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem --> /usr/share/ca-certificates/73362.pem (1708 bytes)
	I0717 11:06:17.501926    9661 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 11:06:17.507174    9661 ssh_runner.go:195] Run: openssl version
	I0717 11:06:17.509061    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73362.pem && ln -fs /usr/share/ca-certificates/73362.pem /etc/ssl/certs/73362.pem"
	I0717 11:06:17.512084    9661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73362.pem
	I0717 11:06:17.513501    9661 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:49 /usr/share/ca-certificates/73362.pem
	I0717 11:06:17.513522    9661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73362.pem
	I0717 11:06:17.515362    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73362.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 11:06:17.518589    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 11:06:17.521910    9661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:17.523377    9661 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:17.523395    9661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:17.525252    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 11:06:17.528203    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7336.pem && ln -fs /usr/share/ca-certificates/7336.pem /etc/ssl/certs/7336.pem"
	I0717 11:06:17.530945    9661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7336.pem
	I0717 11:06:17.532621    9661 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:49 /usr/share/ca-certificates/7336.pem
	I0717 11:06:17.532646    9661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7336.pem
	I0717 11:06:17.534461    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7336.pem /etc/ssl/certs/51391683.0"
	I0717 11:06:17.537888    9661 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 11:06:17.539468    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 11:06:17.541711    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 11:06:17.543804    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 11:06:17.545852    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 11:06:17.547672    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 11:06:17.549484    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 11:06:17.551321    9661 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51499 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:06:17.551387    9661 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:06:17.561806    9661 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 11:06:17.564985    9661 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 11:06:17.564993    9661 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 11:06:17.565017    9661 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 11:06:17.567737    9661 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:06:17.568048    9661 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-018000" does not appear in /Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:06:17.568148    9661 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-6848/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-018000" cluster setting kubeconfig missing "stopped-upgrade-018000" context setting]
	I0717 11:06:17.568351    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/kubeconfig: {Name:mk327624617af42cf4cf2f31d3ffb3402af5684d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:17.568814    9661 kapi.go:59] client config for stopped-upgrade-018000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c47730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:06:17.569160    9661 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 11:06:17.571842    9661 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-018000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0717 11:06:17.571848    9661 kubeadm.go:1160] stopping kube-system containers ...
	I0717 11:06:17.571887    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:06:17.582344    9661 docker.go:483] Stopping containers: [4550eb8b3005 9e1750a1a505 c74cc3d31c5c 342fc1ee8e0f f263e9f5bbf8 7dc850247de5 de75fc7f8d80 a2cd3facfb95]
	I0717 11:06:17.582411    9661 ssh_runner.go:195] Run: docker stop 4550eb8b3005 9e1750a1a505 c74cc3d31c5c 342fc1ee8e0f f263e9f5bbf8 7dc850247de5 de75fc7f8d80 a2cd3facfb95
	I0717 11:06:17.597219    9661 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 11:06:17.602722    9661 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:06:17.605983    9661 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:06:17.605990    9661 kubeadm.go:157] found existing configuration files:
	
	I0717 11:06:17.606016    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/admin.conf
	I0717 11:06:17.608997    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:06:17.609018    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:06:17.611513    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/kubelet.conf
	I0717 11:06:17.614224    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:06:17.614246    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:06:17.617330    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/controller-manager.conf
	I0717 11:06:17.619874    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:06:17.619894    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:06:17.622584    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/scheduler.conf
	I0717 11:06:17.625494    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:06:17.625520    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 11:06:17.628255    9661 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:06:17.630915    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:17.652466    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:18.032777    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:18.166452    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:18.189735    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:18.215208    9661 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:06:18.215290    9661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:14.488111    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:18.716938    9661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:19.217358    9661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:19.222149    9661 api_server.go:72] duration metric: took 1.006949917s to wait for apiserver process to appear ...
	I0717 11:06:19.222158    9661 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:06:19.222168    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:19.490233    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:19.490320    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:19.501923    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:19.501999    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:19.512910    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:19.512978    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:19.523754    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:19.523819    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:19.534600    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:19.534671    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:19.546165    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:19.546229    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:19.557244    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:19.557308    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:19.567715    9411 logs.go:276] 0 containers: []
	W0717 11:06:19.567727    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:19.567783    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:19.579032    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:19.579049    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:19.579055    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:19.591021    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:19.591033    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:19.605740    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:19.605750    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:19.617417    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:19.617429    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:19.642200    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:19.642212    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:19.684467    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:19.684479    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:19.706941    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:19.706955    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:19.723747    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:19.723766    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:19.748241    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:19.748255    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:19.764364    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:19.764377    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:19.782710    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:19.782722    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:19.822847    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:19.822866    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:19.828045    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:19.828053    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:19.843123    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:19.843136    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:19.861694    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:19.861708    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:19.876167    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:19.876181    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:19.888350    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:19.888362    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:22.402661    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:24.223942    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:24.223980    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:27.404991    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:27.405203    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:27.426532    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:27.426623    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:27.449783    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:27.449850    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:27.461282    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:27.461348    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:27.471384    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:27.471454    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:27.489471    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:27.489531    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:27.499950    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:27.500020    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:27.510730    9411 logs.go:276] 0 containers: []
	W0717 11:06:27.510744    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:27.510796    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:27.521699    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:27.521718    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:27.521724    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:27.558956    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:27.558970    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:27.578678    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:27.578692    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:27.600396    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:27.600407    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:27.611958    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:27.611970    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:27.623411    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:27.623424    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:27.638435    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:27.638449    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:27.649404    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:27.649415    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:27.684859    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:27.684868    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:27.698498    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:27.698507    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:27.715837    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:27.715852    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:27.727829    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:27.727840    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:27.732842    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:27.732852    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:27.744143    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:27.744158    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:27.761234    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:27.761246    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:27.775515    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:27.775528    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:27.800170    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:27.800183    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:29.224207    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:29.224259    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:30.316640    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:34.224751    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:34.224827    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:35.317334    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:35.317500    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:06:35.335736    9411 logs.go:276] 2 containers: [794576cf4600 4dcb4cd5c6c7]
	I0717 11:06:35.335821    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:06:35.352907    9411 logs.go:276] 2 containers: [f770b3113fdd 68bd446affdc]
	I0717 11:06:35.352977    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:06:35.363644    9411 logs.go:276] 1 containers: [f64e39217d8f]
	I0717 11:06:35.363709    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:06:35.374555    9411 logs.go:276] 2 containers: [aa344240c59b 3ac0094052c5]
	I0717 11:06:35.374624    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:06:35.385208    9411 logs.go:276] 1 containers: [7b2847ec9476]
	I0717 11:06:35.385271    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:06:35.395600    9411 logs.go:276] 2 containers: [a1ee166fddfd 480d72487256]
	I0717 11:06:35.395672    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:06:35.406614    9411 logs.go:276] 0 containers: []
	W0717 11:06:35.406625    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:06:35.406681    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:06:35.416941    9411 logs.go:276] 2 containers: [569c12fe0be8 ffb9ed2524a5]
	I0717 11:06:35.416964    9411 logs.go:123] Gathering logs for etcd [68bd446affdc] ...
	I0717 11:06:35.416970    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68bd446affdc"
	I0717 11:06:35.434766    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:06:35.434776    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:06:35.457268    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:06:35.457276    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:06:35.491203    9411 logs.go:123] Gathering logs for kube-proxy [7b2847ec9476] ...
	I0717 11:06:35.491215    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2847ec9476"
	I0717 11:06:35.503042    9411 logs.go:123] Gathering logs for storage-provisioner [569c12fe0be8] ...
	I0717 11:06:35.503055    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 569c12fe0be8"
	I0717 11:06:35.514704    9411 logs.go:123] Gathering logs for kube-controller-manager [a1ee166fddfd] ...
	I0717 11:06:35.514714    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1ee166fddfd"
	I0717 11:06:35.532154    9411 logs.go:123] Gathering logs for kube-controller-manager [480d72487256] ...
	I0717 11:06:35.532167    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 480d72487256"
	I0717 11:06:35.546987    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:06:35.546999    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:06:35.584202    9411 logs.go:123] Gathering logs for etcd [f770b3113fdd] ...
	I0717 11:06:35.584210    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f770b3113fdd"
	I0717 11:06:35.597857    9411 logs.go:123] Gathering logs for coredns [f64e39217d8f] ...
	I0717 11:06:35.597868    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64e39217d8f"
	I0717 11:06:35.609705    9411 logs.go:123] Gathering logs for kube-scheduler [aa344240c59b] ...
	I0717 11:06:35.609720    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa344240c59b"
	I0717 11:06:35.625788    9411 logs.go:123] Gathering logs for kube-scheduler [3ac0094052c5] ...
	I0717 11:06:35.625798    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ac0094052c5"
	I0717 11:06:35.640825    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:06:35.640841    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:06:35.645909    9411 logs.go:123] Gathering logs for kube-apiserver [794576cf4600] ...
	I0717 11:06:35.645920    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794576cf4600"
	I0717 11:06:35.660922    9411 logs.go:123] Gathering logs for kube-apiserver [4dcb4cd5c6c7] ...
	I0717 11:06:35.660935    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4dcb4cd5c6c7"
	I0717 11:06:35.681590    9411 logs.go:123] Gathering logs for storage-provisioner [ffb9ed2524a5] ...
	I0717 11:06:35.681603    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffb9ed2524a5"
	I0717 11:06:35.692929    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:06:35.692940    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:06:38.208346    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:39.225354    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:39.225404    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:43.210716    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:43.210863    9411 kubeadm.go:597] duration metric: took 4m4.13909275s to restartPrimaryControlPlane
	W0717 11:06:43.210984    9411 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 11:06:43.211033    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 11:06:44.210116    9411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 11:06:44.215038    9411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:06:44.217823    9411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:06:44.220639    9411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:06:44.220645    9411 kubeadm.go:157] found existing configuration files:
	
	I0717 11:06:44.220663    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/admin.conf
	I0717 11:06:44.223617    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:06:44.223643    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:06:44.226545    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/kubelet.conf
	I0717 11:06:44.229364    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:06:44.229388    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:06:44.232614    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/controller-manager.conf
	I0717 11:06:44.235598    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:06:44.235624    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:06:44.238234    9411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/scheduler.conf
	I0717 11:06:44.240896    9411 kubeadm.go:163] "https://control-plane.minikube.internal:51278" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51278 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:06:44.240920    9411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 11:06:44.243795    9411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 11:06:44.262790    9411 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0717 11:06:44.262831    9411 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 11:06:44.312912    9411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 11:06:44.312989    9411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 11:06:44.313042    9411 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 11:06:44.361349    9411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 11:06:44.369290    9411 out.go:204]   - Generating certificates and keys ...
	I0717 11:06:44.369328    9411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 11:06:44.369358    9411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 11:06:44.369431    9411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 11:06:44.369513    9411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 11:06:44.369550    9411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 11:06:44.369588    9411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 11:06:44.369632    9411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 11:06:44.369663    9411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 11:06:44.369703    9411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 11:06:44.370271    9411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 11:06:44.370290    9411 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 11:06:44.370315    9411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 11:06:44.408965    9411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 11:06:44.533568    9411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 11:06:44.603927    9411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 11:06:44.719135    9411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 11:06:44.748291    9411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 11:06:44.749442    9411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 11:06:44.749465    9411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 11:06:44.835270    9411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 11:06:44.226135    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:44.226198    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:44.839428    9411 out.go:204]   - Booting up control plane ...
	I0717 11:06:44.839473    9411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 11:06:44.839536    9411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 11:06:44.839579    9411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 11:06:44.839627    9411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 11:06:44.839723    9411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 11:06:49.340283    9411 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.506141 seconds
	I0717 11:06:49.340366    9411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 11:06:49.344922    9411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 11:06:49.853562    9411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 11:06:49.853782    9411 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-462000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 11:06:50.357907    9411 kubeadm.go:310] [bootstrap-token] Using token: 8b84m6.t6w3sse1ymni0yha
	I0717 11:06:50.361137    9411 out.go:204]   - Configuring RBAC rules ...
	I0717 11:06:50.361209    9411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 11:06:50.361257    9411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 11:06:50.363261    9411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 11:06:50.364604    9411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 11:06:50.365464    9411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 11:06:50.366668    9411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 11:06:50.369461    9411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 11:06:50.569857    9411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 11:06:50.763650    9411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 11:06:50.764388    9411 kubeadm.go:310] 
	I0717 11:06:50.764424    9411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 11:06:50.764426    9411 kubeadm.go:310] 
	I0717 11:06:50.764464    9411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 11:06:50.764472    9411 kubeadm.go:310] 
	I0717 11:06:50.764489    9411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 11:06:50.764519    9411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 11:06:50.764544    9411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 11:06:50.764547    9411 kubeadm.go:310] 
	I0717 11:06:50.764586    9411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 11:06:50.764589    9411 kubeadm.go:310] 
	I0717 11:06:50.764615    9411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 11:06:50.764618    9411 kubeadm.go:310] 
	I0717 11:06:50.764646    9411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 11:06:50.764689    9411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 11:06:50.764730    9411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 11:06:50.764734    9411 kubeadm.go:310] 
	I0717 11:06:50.764782    9411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 11:06:50.764823    9411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 11:06:50.764827    9411 kubeadm.go:310] 
	I0717 11:06:50.764869    9411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8b84m6.t6w3sse1ymni0yha \
	I0717 11:06:50.764920    9411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:41cf485c9ce3ed322472481a1cde965b121021497c454b4b9fd17940d4869b14 \
	I0717 11:06:50.764931    9411 kubeadm.go:310] 	--control-plane 
	I0717 11:06:50.764935    9411 kubeadm.go:310] 
	I0717 11:06:50.764981    9411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 11:06:50.764984    9411 kubeadm.go:310] 
	I0717 11:06:50.765040    9411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8b84m6.t6w3sse1ymni0yha \
	I0717 11:06:50.765103    9411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:41cf485c9ce3ed322472481a1cde965b121021497c454b4b9fd17940d4869b14 
	I0717 11:06:50.765157    9411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 11:06:50.765229    9411 cni.go:84] Creating CNI manager for ""
	I0717 11:06:50.765238    9411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:06:50.773801    9411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 11:06:50.777830    9411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 11:06:50.780936    9411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 11:06:50.786743    9411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 11:06:50.786823    9411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-462000 minikube.k8s.io/updated_at=2024_07_17T11_06_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=running-upgrade-462000 minikube.k8s.io/primary=true
	I0717 11:06:50.786824    9411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 11:06:50.794152    9411 ops.go:34] apiserver oom_adj: -16
	I0717 11:06:50.829750    9411 kubeadm.go:1113] duration metric: took 42.967ms to wait for elevateKubeSystemPrivileges
	I0717 11:06:50.833031    9411 kubeadm.go:394] duration metric: took 4m11.775539375s to StartCluster
	I0717 11:06:50.833047    9411 settings.go:142] acquiring lock: {Name:mk52ddc32cf249ba715452a288aa286713554b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:50.833205    9411 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:06:50.833640    9411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/kubeconfig: {Name:mk327624617af42cf4cf2f31d3ffb3402af5684d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:50.833855    9411 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:06:50.833960    9411 config.go:182] Loaded profile config "running-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:06:50.833907    9411 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 11:06:50.834002    9411 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-462000"
	I0717 11:06:50.834011    9411 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-462000"
	I0717 11:06:50.834013    9411 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-462000"
	W0717 11:06:50.834017    9411 addons.go:243] addon storage-provisioner should already be in state true
	I0717 11:06:50.834023    9411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-462000"
	I0717 11:06:50.834027    9411 host.go:66] Checking if "running-upgrade-462000" exists ...
	I0717 11:06:50.835063    9411 kapi.go:59] client config for running-upgrade-462000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/running-upgrade-462000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fc3730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:06:50.835188    9411 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-462000"
	W0717 11:06:50.835193    9411 addons.go:243] addon default-storageclass should already be in state true
	I0717 11:06:50.835200    9411 host.go:66] Checking if "running-upgrade-462000" exists ...
	I0717 11:06:50.837828    9411 out.go:177] * Verifying Kubernetes components...
	I0717 11:06:50.838132    9411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 11:06:50.841977    9411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 11:06:50.841985    9411 sshutil.go:53] new ssh client: &{IP:localhost Port:51246 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/running-upgrade-462000/id_rsa Username:docker}
	I0717 11:06:50.845664    9411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:49.226930    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:49.226983    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:50.849787    9411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:50.853807    9411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:06:50.853813    9411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 11:06:50.853819    9411 sshutil.go:53] new ssh client: &{IP:localhost Port:51246 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/running-upgrade-462000/id_rsa Username:docker}
	I0717 11:06:50.935525    9411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:06:50.940433    9411 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:06:50.940476    9411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:50.944524    9411 api_server.go:72] duration metric: took 110.657542ms to wait for apiserver process to appear ...
	I0717 11:06:50.944535    9411 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:06:50.944542    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:50.980660    9411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 11:06:50.996933    9411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:06:54.227981    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:54.228030    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:55.944891    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:55.944931    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:59.229366    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:59.229404    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:00.946577    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:00.946620    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:04.231069    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:04.231103    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:05.947008    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:05.947031    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:09.231892    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:09.231930    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:10.947363    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:10.947422    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:14.234124    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:14.234149    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:15.947911    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:15.947953    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:20.948565    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:20.948586    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:07:21.353362    9411 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0717 11:07:21.356684    9411 out.go:177] * Enabled addons: storage-provisioner
	I0717 11:07:19.236300    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:19.236444    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:19.251380    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:19.251467    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:19.263371    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:19.263442    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:19.274402    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:19.274469    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:19.284734    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:19.284804    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:19.295692    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:19.295760    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:19.306334    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:19.306401    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:19.316690    9661 logs.go:276] 0 containers: []
	W0717 11:07:19.316702    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:19.316763    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:19.326958    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:19.326979    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:19.326984    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:19.441845    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:19.441857    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:19.455807    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:19.455818    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:19.471244    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:19.471254    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:19.484219    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:19.484230    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:19.523149    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:19.523156    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:19.564122    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:19.564133    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:19.582611    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:19.582624    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:19.594161    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:19.594172    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:19.605928    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:19.605940    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:19.618496    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:19.618508    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:19.644027    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:19.644038    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:19.655876    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:19.655887    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:19.660274    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:19.660287    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:19.673959    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:19.673970    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:19.686013    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:19.686025    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:22.204038    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:21.364581    9411 addons.go:510] duration metric: took 30.530900291s for enable addons: enabled=[storage-provisioner]
	I0717 11:07:27.206258    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:27.206453    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:27.223886    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:27.223962    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:27.237099    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:27.237180    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:27.248964    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:27.249034    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:27.263176    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:27.263243    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:27.273604    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:27.273678    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:27.284376    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:27.284455    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:27.294961    9661 logs.go:276] 0 containers: []
	W0717 11:07:27.294974    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:27.295029    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:27.306053    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:27.306072    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:27.306079    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:27.310733    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:27.310742    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:27.348686    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:27.348699    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:27.361991    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:27.362001    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:27.379379    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:27.379392    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:27.403639    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:27.403649    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:27.415130    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:27.415140    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:27.428307    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:27.428317    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:27.439778    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:27.439788    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:27.477608    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:27.477619    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:27.498919    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:27.498932    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:27.512960    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:27.512976    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:27.524924    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:27.524935    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:27.563474    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:27.563489    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:27.575319    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:27.575329    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:27.586931    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:27.586942    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:25.949759    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:25.949807    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:30.106758    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:30.951021    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:30.951074    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:35.109255    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:35.109443    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:35.126677    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:35.126778    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:35.140386    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:35.140464    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:35.151999    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:35.152074    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:35.162447    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:35.162522    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:35.175888    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:35.175959    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:35.187214    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:35.187283    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:35.197270    9661 logs.go:276] 0 containers: []
	W0717 11:07:35.197280    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:35.197339    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:35.207763    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:35.207783    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:35.207788    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:35.222005    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:35.222016    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:35.233963    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:35.233975    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:35.251986    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:35.251998    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:35.270556    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:35.270570    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:35.281995    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:35.282007    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:35.293532    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:35.293544    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:35.306832    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:35.306843    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:35.320924    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:35.320934    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:35.336154    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:35.336164    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:35.348473    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:35.348484    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:35.385521    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:35.385538    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:35.410144    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:35.410155    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:35.421830    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:35.421843    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:35.459381    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:35.459389    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:35.463267    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:35.463275    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:38.001397    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:35.952975    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:35.953016    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:43.002873    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:43.002989    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:43.020910    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:43.020984    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:43.032301    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:43.032380    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:43.042305    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:43.042377    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:43.052857    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:43.052927    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:43.063867    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:43.063931    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:43.074410    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:43.074475    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:43.085117    9661 logs.go:276] 0 containers: []
	W0717 11:07:43.085128    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:43.085180    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:43.095832    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:43.095853    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:43.095858    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:43.109991    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:43.110002    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:43.121924    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:43.121935    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:43.138812    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:43.138824    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:43.152920    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:43.152931    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:43.177798    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:43.177805    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:43.191880    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:43.191893    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:43.228749    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:43.228760    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:43.243892    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:43.243902    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:43.255388    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:43.255398    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:43.259675    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:43.259687    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:43.273871    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:43.273883    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:43.285548    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:43.285558    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:43.297265    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:43.297281    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:43.309584    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:43.309595    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:43.349177    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:43.349187    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:40.954952    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:40.955008    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:45.888666    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:45.956524    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:45.956567    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:50.891416    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:50.891591    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:50.911289    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:50.911390    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:50.927895    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:50.927969    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:50.939631    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:50.939704    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:50.951492    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:50.951570    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:50.962135    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:50.962201    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:50.974430    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:50.974501    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:50.986014    9661 logs.go:276] 0 containers: []
	W0717 11:07:50.986025    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:50.986084    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:51.003412    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:51.003430    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:51.003436    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:51.017085    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:51.017098    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:51.030093    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:51.030106    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:51.049313    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:51.049326    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:51.064624    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:51.064636    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:51.077650    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:51.077662    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:51.097318    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:51.097330    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:51.111781    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:51.111793    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:51.124022    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:51.124032    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:51.148896    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:51.148914    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:51.187375    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:51.187391    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:51.226049    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:51.226062    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:51.230762    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:51.230774    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:51.249097    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:51.249108    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:51.289803    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:51.289817    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:51.308806    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:51.308820    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:50.958814    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:50.958896    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:50.970129    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:07:50.970204    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:50.989580    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:07:50.989680    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:51.022404    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:07:51.022477    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:51.034319    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:07:51.034389    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:51.045806    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:07:51.045896    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:51.057644    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:07:51.057726    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:51.068840    9411 logs.go:276] 0 containers: []
	W0717 11:07:51.068853    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:51.068909    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:51.079660    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:07:51.079675    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:51.079681    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:51.121925    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:07:51.121947    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:07:51.138147    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:07:51.138163    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:07:51.156904    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:07:51.156916    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:07:51.173475    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:07:51.173490    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:07:51.193862    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:51.193873    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:51.219701    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:07:51.219711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:51.231864    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:51.231873    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:51.237005    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:51.237016    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:51.321950    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:07:51.321964    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:07:51.334725    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:07:51.334740    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:07:51.346360    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:07:51.346370    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:07:51.358808    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:07:51.358818    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:07:53.823030    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:53.872002    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:58.825308    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:58.825498    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:58.843961    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:58.844051    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:58.857842    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:58.857919    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:58.871751    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:58.871818    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:58.883803    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:58.883882    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:58.894928    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:58.895001    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:58.906511    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:58.906587    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:58.917908    9661 logs.go:276] 0 containers: []
	W0717 11:07:58.917921    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:58.917988    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:58.929849    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:58.929869    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:58.929875    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:58.944337    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:58.944351    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:58.985729    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:58.985742    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:59.025896    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:59.025909    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:59.041442    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:59.041460    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:59.057753    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:59.057769    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:59.071711    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:59.071724    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:59.113095    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:59.113117    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:59.128066    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:59.128081    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:59.147374    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:59.147385    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:59.172976    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:59.172985    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:59.186071    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:59.186083    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:59.203218    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:59.203229    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:59.215620    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:59.215631    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:59.219752    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:59.219759    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:59.234975    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:59.234986    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:01.754964    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:58.872307    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:58.872396    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:58.884076    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:07:58.884113    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:58.895550    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:07:58.895584    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:58.908799    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:07:58.908860    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:58.920689    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:07:58.920757    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:58.932384    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:07:58.932457    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:58.946254    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:07:58.946320    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:58.957399    9411 logs.go:276] 0 containers: []
	W0717 11:07:58.957411    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:58.957471    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:58.968407    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:07:58.968423    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:58.968428    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:59.007691    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:59.007711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:59.012665    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:59.012675    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:59.080673    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:07:59.080686    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:07:59.093780    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:07:59.093790    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:07:59.106178    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:07:59.106189    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:07:59.121602    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:07:59.121615    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:07:59.134625    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:59.134636    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:59.159773    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:07:59.159783    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:59.171346    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:07:59.171358    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:07:59.187666    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:07:59.187674    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:07:59.202845    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:07:59.202856    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:07:59.221366    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:07:59.221375    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:01.740381    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:06.757201    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:06.757257    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:06.772839    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:06.772908    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:06.784128    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:06.784176    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:06.795605    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:06.795659    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:06.806711    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:06.806757    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:06.817526    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:06.817563    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:06.828663    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:06.828699    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:06.840148    9661 logs.go:276] 0 containers: []
	W0717 11:08:06.840157    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:06.840211    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:06.858619    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:06.858635    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:06.858639    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:06.900083    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:06.900098    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:06.914904    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:06.914920    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:06.927305    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:06.927317    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:06.939801    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:06.939813    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:06.953935    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:06.953949    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:06.966984    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:06.966997    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:06.980356    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:06.980368    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:06.999027    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:06.999038    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:07.024510    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:07.024521    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:07.029779    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:07.029788    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:07.045768    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:07.045780    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:07.058032    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:07.058045    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:07.098155    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:07.098166    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:07.140617    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:07.140631    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:07.155014    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:07.155025    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:06.741441    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:06.741565    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:06.754142    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:06.754222    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:06.769257    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:06.769324    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:06.782463    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:06.782540    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:06.793978    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:06.794052    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:06.805112    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:06.805182    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:06.816678    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:06.816744    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:06.828176    9411 logs.go:276] 0 containers: []
	W0717 11:08:06.828188    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:06.828245    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:06.839697    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:06.839713    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:06.839718    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:06.858028    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:06.858039    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:06.879162    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:06.879175    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:06.892057    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:06.892069    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:06.907199    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:06.907210    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:06.919486    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:06.919498    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:06.935128    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:06.935140    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:06.948127    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:06.948138    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:06.961477    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:06.961495    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:06.987136    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:06.987152    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:07.027694    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:07.027711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:07.033206    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:07.033218    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:07.071959    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:07.071971    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:09.671906    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:09.592539    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:14.674162    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:14.674235    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:14.686320    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:14.686397    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:14.708079    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:14.708146    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:14.719587    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:14.719654    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:14.731640    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:14.731711    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:14.743000    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:14.743070    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:14.754136    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:14.754204    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:14.764815    9661 logs.go:276] 0 containers: []
	W0717 11:08:14.764827    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:14.764885    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:14.776420    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:14.776436    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:14.776442    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:14.789037    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:14.789050    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:14.806511    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:14.806525    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:14.821580    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:14.821593    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:14.834511    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:14.834524    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:14.853921    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:14.853935    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:14.866714    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:14.866730    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:14.904279    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:14.904288    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:14.919386    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:14.919399    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:14.932517    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:14.932529    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:14.956826    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:14.956837    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:14.976244    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:14.976255    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:14.981002    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:14.981012    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:15.023758    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:15.023770    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:15.035811    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:15.035823    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:15.049361    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:15.049374    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:17.588001    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:14.595230    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:14.595404    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:14.608433    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:14.608507    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:14.621335    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:14.621399    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:14.632027    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:14.632098    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:14.642065    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:14.642133    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:14.653534    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:14.653602    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:14.666956    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:14.667021    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:14.686763    9411 logs.go:276] 0 containers: []
	W0717 11:08:14.686772    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:14.686798    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:14.699409    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:14.699425    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:14.699430    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:14.712078    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:14.712095    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:14.724810    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:14.724822    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:14.744957    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:14.744966    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:14.758330    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:14.758343    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:14.782958    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:14.782974    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:14.821917    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:14.821927    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:14.861040    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:14.861052    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:14.877801    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:14.877812    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:14.890787    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:14.890799    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:14.903874    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:14.903886    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:14.908957    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:14.908970    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:14.924750    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:14.924760    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:17.442965    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:22.590125    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:22.590214    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:22.602256    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:22.602335    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:22.613990    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:22.614061    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:22.627428    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:22.627494    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:22.643331    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:22.643396    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:22.654997    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:22.655072    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:22.666546    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:22.666621    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:22.677864    9661 logs.go:276] 0 containers: []
	W0717 11:08:22.677875    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:22.677938    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:22.689022    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:22.689044    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:22.689051    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:22.693628    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:22.693635    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:22.708450    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:22.708467    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:22.723402    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:22.723413    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:22.741916    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:22.741931    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:22.755020    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:22.755033    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:22.767551    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:22.767562    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:22.807919    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:22.807929    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:22.830958    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:22.830969    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:22.854087    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:22.854095    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:22.913360    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:22.913373    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:22.931964    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:22.931975    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:22.945500    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:22.945511    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:22.986336    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:22.986350    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:23.001176    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:23.001191    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:23.012540    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:23.012553    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:22.445227    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:22.445439    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:22.464551    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:22.464644    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:22.479017    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:22.479097    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:22.491048    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:22.491125    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:22.501855    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:22.501922    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:22.512451    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:22.512519    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:22.523471    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:22.523542    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:22.534195    9411 logs.go:276] 0 containers: []
	W0717 11:08:22.534206    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:22.534266    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:22.545148    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:22.545164    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:22.545169    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:22.549767    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:22.549774    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:22.584019    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:22.584031    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:22.598811    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:22.598823    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:22.613531    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:22.613545    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:22.630065    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:22.630074    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:22.648601    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:22.648615    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:22.661004    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:22.661016    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:22.701984    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:22.702003    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:22.719677    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:22.719689    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:22.736281    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:22.736294    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:22.750163    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:22.750175    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:22.762839    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:22.762850    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:25.526531    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:25.291009    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:30.528875    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:30.528973    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:30.540340    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:30.540408    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:30.556528    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:30.556601    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:30.567615    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:30.567685    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:30.579445    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:30.579520    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:30.590520    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:30.590586    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:30.602081    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:30.602144    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:30.613927    9661 logs.go:276] 0 containers: []
	W0717 11:08:30.613939    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:30.613996    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:30.625897    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:30.625913    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:30.625918    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:30.665177    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:30.665198    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:30.700687    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:30.700697    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:30.718223    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:30.718235    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:30.732060    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:30.732071    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:30.743997    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:30.744007    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:30.768212    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:30.768219    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:30.807159    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:30.807174    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:30.822814    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:30.822824    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:30.836303    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:30.836314    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:30.847683    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:30.847693    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:30.858982    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:30.858994    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:30.863278    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:30.863286    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:30.877060    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:30.877072    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:30.894665    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:30.894676    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:30.913366    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:30.913377    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:33.429756    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:30.293327    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:30.293526    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:30.314600    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:30.314699    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:30.334159    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:30.334222    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:30.346078    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:30.346169    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:30.356938    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:30.357010    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:30.367162    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:30.367238    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:30.377741    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:30.377809    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:30.388168    9411 logs.go:276] 0 containers: []
	W0717 11:08:30.388181    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:30.388240    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:30.398659    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:30.398675    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:30.398681    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:30.410686    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:30.410698    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:30.430286    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:30.430300    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:30.454352    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:30.454360    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:30.490580    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:30.490590    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:30.507211    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:30.507221    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:30.518730    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:30.518741    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:30.536058    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:30.536069    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:30.548216    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:30.548229    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:30.561360    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:30.561372    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:30.601661    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:30.601683    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:30.606810    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:30.606821    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:30.624926    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:30.624939    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:33.145478    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:38.431923    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:38.432076    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:38.445399    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:38.445473    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:38.456635    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:38.456706    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:38.467433    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:38.467503    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:38.478581    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:38.478659    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:38.146870    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:38.147121    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:38.175398    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:38.175513    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:38.192587    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:38.192672    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:38.205603    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:38.205669    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:38.215894    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:38.215973    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:38.227066    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:38.227136    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:38.238470    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:38.238537    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:38.248994    9411 logs.go:276] 0 containers: []
	W0717 11:08:38.249005    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:38.249058    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:38.261272    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:38.261290    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:38.261295    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:38.265785    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:38.265792    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:38.279420    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:38.279431    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:38.292337    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:38.292350    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:38.307320    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:38.307336    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:38.329941    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:38.329952    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:38.367448    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:38.367456    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:38.381710    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:38.381724    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:38.393215    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:38.393225    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:38.405042    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:38.405053    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:38.423467    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:38.423478    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:38.436164    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:38.436175    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:38.462574    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:38.462593    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:38.491346    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:38.491438    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:38.503317    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:38.503387    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:38.513639    9661 logs.go:276] 0 containers: []
	W0717 11:08:38.513651    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:38.513711    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:38.524335    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:38.524353    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:38.524359    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:38.528616    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:38.528623    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:38.542499    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:38.542509    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:38.559808    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:38.559818    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:38.573315    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:38.573325    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:38.615397    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:38.615409    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:38.627921    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:38.627931    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:38.649318    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:38.649329    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:38.661485    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:38.661499    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:38.679706    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:38.679717    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:38.691547    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:38.691558    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:38.703200    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:38.703210    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:38.742528    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:38.742537    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:38.781014    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:38.781024    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:38.796136    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:38.796146    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:38.814173    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:38.814186    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:41.341348    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:41.003842    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:46.343553    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:46.343670    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:46.354970    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:46.355044    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:46.367237    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:46.367306    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:46.377457    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:46.377527    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:46.388402    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:46.388476    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:46.403398    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:46.403475    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:46.414085    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:46.414150    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:46.424695    9661 logs.go:276] 0 containers: []
	W0717 11:08:46.424709    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:46.424769    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:46.435372    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:46.435391    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:46.435397    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:46.449350    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:46.449363    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:46.489377    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:46.489389    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:46.500884    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:46.500897    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:46.514720    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:46.514731    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:46.525982    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:46.525993    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:46.540207    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:46.540219    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:46.557432    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:46.557443    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:46.568792    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:46.568804    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:46.581757    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:46.581770    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:46.606930    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:46.606940    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:46.618848    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:46.618864    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:46.630477    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:46.630487    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:46.670814    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:46.670832    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:46.675471    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:46.675479    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:46.711567    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:46.711577    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:46.006144    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:46.006339    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:46.022604    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:46.022676    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:46.034397    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:46.034462    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:46.045251    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:46.045343    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:46.055794    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:46.055857    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:46.065997    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:46.066059    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:46.076723    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:46.076784    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:46.087391    9411 logs.go:276] 0 containers: []
	W0717 11:08:46.087402    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:46.087457    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:46.098036    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:46.098051    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:46.098056    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:46.112942    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:46.112952    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:46.124531    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:46.124543    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:46.142394    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:46.142408    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:46.157874    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:46.157887    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:46.169204    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:46.169216    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:46.180658    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:46.180669    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:46.218279    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:46.218288    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:46.222701    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:46.222711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:46.233829    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:46.233840    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:46.252170    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:46.252182    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:46.276056    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:46.276066    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:46.311964    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:46.311976    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:49.227447    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:48.827475    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:54.228794    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:54.228926    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:54.242493    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:54.242558    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:54.255073    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:54.255142    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:54.265278    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:54.265344    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:54.275954    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:54.276026    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:54.286711    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:54.286771    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:54.297553    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:54.297621    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:54.307904    9661 logs.go:276] 0 containers: []
	W0717 11:08:54.307915    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:54.307972    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:54.321757    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:54.321776    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:54.321781    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:54.337139    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:54.337150    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:54.351870    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:54.351882    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:54.369400    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:54.369410    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:54.383619    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:54.383629    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:54.407196    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:54.407207    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:54.444676    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:54.444686    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:54.467941    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:54.467953    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:54.506665    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:54.506683    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:54.525117    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:54.525126    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:54.538100    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:54.538112    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:54.549352    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:54.549363    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:54.561395    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:54.561406    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:54.573472    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:54.573482    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:54.577738    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:54.577747    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:54.613323    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:54.613334    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:57.128514    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:53.828202    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:53.828440    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:53.844597    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:08:53.844680    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:53.857498    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:08:53.857572    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:53.868953    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:08:53.869014    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:53.879228    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:08:53.879291    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:53.889811    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:08:53.889871    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:53.900389    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:08:53.900458    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:53.911698    9411 logs.go:276] 0 containers: []
	W0717 11:08:53.911710    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:53.911766    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:53.922003    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:08:53.922017    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:53.922022    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:53.926407    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:08:53.926416    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:08:53.938304    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:53.938317    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:53.963506    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:08:53.963516    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:08:53.982136    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:53.982147    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:54.018671    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:54.018681    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:54.053718    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:08:54.053732    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:08:54.068368    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:08:54.068377    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:08:54.082063    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:08:54.082075    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:08:54.100967    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:08:54.100977    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:08:54.112986    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:08:54.112997    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:08:54.131611    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:08:54.131622    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:08:54.143511    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:08:54.143522    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:56.657409    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:02.131056    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:02.131207    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:02.146846    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:02.146923    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:02.159360    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:02.159436    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:02.170332    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:02.170398    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:02.180475    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:02.180545    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:02.190677    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:02.190742    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:02.201116    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:02.201182    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:02.211229    9661 logs.go:276] 0 containers: []
	W0717 11:09:02.211242    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:02.211298    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:02.221616    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:02.221634    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:02.221640    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:02.235638    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:02.235648    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:02.248955    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:02.248966    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:02.286583    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:02.286597    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:02.322110    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:02.322123    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:02.336604    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:02.336617    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:02.350759    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:02.350772    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:02.362799    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:02.362813    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:02.380458    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:02.380467    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:02.392261    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:02.392273    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:02.412029    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:02.412044    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:02.423813    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:02.423827    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:02.448353    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:02.448364    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:02.452310    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:02.452316    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:02.491655    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:02.491669    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:02.502955    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:02.502966    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:01.659679    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:01.659893    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:01.680555    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:01.680649    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:01.695487    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:01.695560    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:01.708082    9411 logs.go:276] 2 containers: [a81614bf5b1d 54f9d88ef059]
	I0717 11:09:01.708147    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:01.718446    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:01.718519    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:01.729093    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:01.729162    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:01.739508    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:01.739576    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:01.750050    9411 logs.go:276] 0 containers: []
	W0717 11:09:01.750060    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:01.750116    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:01.760429    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:01.760448    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:01.760454    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:01.774772    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:01.774782    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:01.790409    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:01.790422    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:01.803390    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:01.803400    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:01.817899    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:01.817909    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:01.829807    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:01.829817    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:01.844162    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:01.844174    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:01.849061    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:01.849068    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:01.884294    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:01.884305    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:01.902552    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:01.902562    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:01.914924    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:01.914937    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:01.939900    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:01.939911    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:01.951980    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:01.951992    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:05.019895    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:04.490560    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:10.022090    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:10.022244    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:10.037804    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:10.037873    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:10.049042    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:10.049114    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:10.059746    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:10.059811    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:10.070308    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:10.070369    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:10.080519    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:10.080586    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:10.096753    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:10.096825    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:10.106961    9661 logs.go:276] 0 containers: []
	W0717 11:09:10.106973    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:10.107030    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:10.121900    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:10.121919    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:10.121924    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:10.134505    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:10.134520    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:10.148721    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:10.148736    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:10.163527    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:10.163542    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:10.174852    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:10.174866    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:10.199098    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:10.199105    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:10.213633    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:10.213650    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:10.224586    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:10.224598    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:10.236349    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:10.236361    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:10.279143    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:10.279153    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:10.291161    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:10.291175    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:10.304438    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:10.304454    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:10.322679    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:10.322694    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:10.360318    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:10.360326    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:10.364222    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:10.364229    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:10.402707    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:10.402723    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:12.919333    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:09.492801    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:09.493090    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:09.513887    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:09.513990    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:09.528442    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:09.528526    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:09.544255    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:09.544324    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:09.554480    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:09.554538    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:09.564681    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:09.564752    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:09.580372    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:09.580448    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:09.590576    9411 logs.go:276] 0 containers: []
	W0717 11:09:09.590586    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:09.590635    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:09.601002    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:09.601019    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:09.601026    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:09.639838    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:09.639848    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:09.651458    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:09.651471    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:09.662612    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:09.662624    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:09.677303    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:09.677314    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:09.689624    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:09.689638    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:09.701028    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:09.701039    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:09.705884    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:09.705893    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:09.723158    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:09.723172    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:09.735001    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:09.735011    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:09.751157    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:09.751167    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:09.775271    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:09.775280    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:09.787264    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:09.787273    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:09.801090    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:09.801102    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:09.822802    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:09.822815    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:12.361696    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:17.921515    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:17.921621    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:17.933177    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:17.933257    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:17.943472    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:17.943543    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:17.953853    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:17.953923    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:17.964422    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:17.964488    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:17.975305    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:17.975367    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:17.985944    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:17.986015    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:17.996114    9661 logs.go:276] 0 containers: []
	W0717 11:09:17.996131    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:17.996187    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:18.006779    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:18.006795    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:18.006800    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:18.026202    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:18.026215    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:18.037844    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:18.037859    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:18.051833    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:18.051848    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:18.088565    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:18.088574    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:18.100108    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:18.100120    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:18.117532    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:18.117546    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:18.129612    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:18.129622    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:18.133944    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:18.133950    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:18.167613    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:18.167627    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:18.192330    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:18.192341    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:18.203750    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:18.203762    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:18.221229    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:18.221242    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:18.261216    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:18.261230    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:18.275932    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:18.275946    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:18.291325    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:18.291336    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:17.364224    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:17.364391    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:17.376874    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:17.376945    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:17.387553    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:17.387626    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:17.398438    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:17.398511    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:17.409558    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:17.409633    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:17.427261    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:17.427330    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:17.438140    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:17.438203    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:17.448826    9411 logs.go:276] 0 containers: []
	W0717 11:09:17.448839    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:17.448899    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:17.459563    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:17.459582    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:17.459586    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:17.474527    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:17.474537    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:17.489296    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:17.489312    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:17.524568    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:17.524579    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:17.562294    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:17.562303    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:17.573701    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:17.573711    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:17.598439    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:17.598455    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:17.602935    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:17.602941    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:17.621566    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:17.621576    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:17.633816    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:17.633828    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:17.646091    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:17.646102    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:17.658233    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:17.658242    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:17.669541    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:17.669551    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:17.705663    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:17.705675    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:17.717330    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:17.717340    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:20.805388    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:20.231245    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:25.807619    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:25.807744    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:25.822040    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:25.822112    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:25.834036    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:25.834099    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:25.844346    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:25.844420    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:25.855027    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:25.855088    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:25.866081    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:25.866152    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:25.876842    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:25.876906    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:25.888995    9661 logs.go:276] 0 containers: []
	W0717 11:09:25.889006    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:25.889058    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:25.900011    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:25.900032    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:25.900038    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:25.911632    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:25.911643    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:25.916210    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:25.916216    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:25.955370    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:25.955383    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:25.966807    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:25.966820    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:26.003314    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:26.003327    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:26.017718    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:26.017729    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:26.032672    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:26.032686    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:26.044639    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:26.044650    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:26.061645    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:26.061655    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:26.074128    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:26.074141    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:26.086409    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:26.086418    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:26.123665    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:26.123675    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:26.137313    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:26.137322    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:26.156337    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:26.156348    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:26.168352    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:26.168362    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:25.233550    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:25.233767    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:25.255511    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:25.255618    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:25.270480    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:25.270558    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:25.283717    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:25.283781    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:25.295196    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:25.295271    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:25.305387    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:25.305455    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:25.316046    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:25.316110    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:25.326503    9411 logs.go:276] 0 containers: []
	W0717 11:09:25.326520    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:25.326577    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:25.337824    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:25.337842    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:25.337847    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:25.355995    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:25.356007    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:25.360836    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:25.360845    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:25.375034    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:25.375046    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:25.386606    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:25.386618    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:25.398581    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:25.398593    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:25.423841    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:25.423849    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:25.463033    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:25.463041    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:25.477417    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:25.477427    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:25.488674    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:25.488684    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:25.503591    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:25.503599    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:25.515888    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:25.515899    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:25.556951    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:25.556964    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:25.568646    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:25.568660    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:25.580205    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:25.580223    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:28.098196    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:28.692538    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:33.100559    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:33.100733    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:33.116596    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:33.116677    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:33.128233    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:33.128294    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:33.139340    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:33.139413    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:33.150135    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:33.150204    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:33.160896    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:33.160969    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:33.171505    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:33.171581    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:33.185956    9411 logs.go:276] 0 containers: []
	W0717 11:09:33.185967    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:33.186039    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:33.197035    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:33.197055    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:33.197061    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:33.215981    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:33.215991    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:33.233019    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:33.233029    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:33.258513    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:33.258521    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:33.296937    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:33.296948    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:33.312148    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:33.312158    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:33.323471    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:33.323484    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:33.335131    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:33.335141    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:33.346124    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:33.346135    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:33.350679    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:33.350687    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:33.365680    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:33.365690    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:33.381054    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:33.381065    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:33.396989    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:33.397001    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:33.409355    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:33.409365    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:33.447025    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:33.447036    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:33.694758    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:33.694843    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:33.708059    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:33.708131    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:33.724759    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:33.724823    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:33.734855    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:33.734923    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:33.748700    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:33.748769    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:33.759324    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:33.759388    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:33.770396    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:33.770466    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:33.780977    9661 logs.go:276] 0 containers: []
	W0717 11:09:33.780988    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:33.781046    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:33.791358    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:33.791377    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:33.791383    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:33.813468    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:33.813475    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:33.849922    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:33.849929    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:33.861078    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:33.861092    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:33.882611    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:33.882621    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:33.894015    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:33.894026    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:33.898943    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:33.898951    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:33.935436    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:33.935450    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:33.974617    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:33.974630    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:33.988538    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:33.988549    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:34.003065    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:34.003076    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:34.016083    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:34.016093    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:34.028576    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:34.028587    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:34.042610    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:34.042621    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:34.054338    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:34.054351    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:34.069617    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:34.069628    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:36.583494    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:35.959532    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:41.585787    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:41.585881    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:41.604887    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:41.604955    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:41.616747    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:41.616816    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:41.628948    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:41.629008    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:41.644681    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:41.644751    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:41.656100    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:41.656165    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:41.667927    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:41.667996    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:41.678379    9661 logs.go:276] 0 containers: []
	W0717 11:09:41.678400    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:41.678456    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:41.689056    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:41.689073    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:41.689078    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:41.703297    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:41.703309    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:41.717093    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:41.717106    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:41.731252    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:41.731262    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:41.742792    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:41.742803    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:41.754237    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:41.754249    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:41.767628    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:41.767639    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:41.779549    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:41.779560    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:41.794373    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:41.794384    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:41.806023    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:41.806036    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:41.842715    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:41.842728    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:41.859955    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:41.859966    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:41.882649    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:41.882657    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:41.921872    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:41.921882    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:41.925787    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:41.925795    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:41.963019    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:41.963031    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:40.961846    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:40.961977    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:40.975329    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:40.975407    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:40.987020    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:40.987088    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:40.999075    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:40.999147    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:41.009563    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:41.009632    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:41.020277    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:41.020339    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:41.039480    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:41.039551    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:41.049753    9411 logs.go:276] 0 containers: []
	W0717 11:09:41.049769    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:41.049824    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:41.062798    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:41.062816    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:41.062821    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:41.067505    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:41.067512    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:41.086803    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:41.086815    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:41.123549    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:41.123557    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:41.134764    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:41.134777    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:41.152177    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:41.152189    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:41.176276    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:41.176284    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:41.215553    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:41.215568    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:41.230174    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:41.230184    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:41.241510    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:41.241521    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:41.253602    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:41.253615    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:41.265156    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:41.265166    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:41.276751    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:41.276762    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:41.288268    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:41.288277    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:41.303073    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:41.303084    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:44.478684    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:43.816879    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:49.480952    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:49.481084    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:49.492334    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:49.492414    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:49.503155    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:49.503222    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:49.513939    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:49.514001    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:49.524630    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:49.524700    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:49.535348    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:49.535406    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:49.545787    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:49.545860    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:49.558693    9661 logs.go:276] 0 containers: []
	W0717 11:09:49.558707    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:49.558766    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:49.574618    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:49.574634    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:49.574640    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:49.611998    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:49.612008    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:49.626808    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:49.626819    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:49.638847    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:49.638858    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:49.662899    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:49.662910    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:49.684999    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:49.685009    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:49.700956    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:49.700969    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:49.714722    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:49.714731    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:49.728997    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:49.729010    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:49.740410    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:49.740423    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:49.755073    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:49.755085    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:49.794420    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:49.794427    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:49.799065    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:49.799073    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:49.834267    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:49.834278    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:49.847385    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:49.847396    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:49.859129    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:49.859143    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:52.374581    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:48.819162    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:48.819324    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:48.832139    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:48.832219    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:48.850958    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:48.851022    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:48.861678    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:48.861747    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:48.872110    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:48.872169    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:48.882367    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:48.882439    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:48.892416    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:48.892474    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:48.902409    9411 logs.go:276] 0 containers: []
	W0717 11:09:48.902425    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:48.902479    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:48.912869    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:48.912887    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:48.912893    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:48.917459    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:48.917468    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:48.956914    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:48.956927    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:09:48.971327    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:48.971339    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:48.983078    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:48.983089    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:48.998585    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:48.998595    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:49.013544    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:49.013553    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:49.053937    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:49.053951    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:49.065975    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:49.065986    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:49.085690    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:49.085701    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:49.102395    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:49.102406    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:49.114266    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:49.114279    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:49.128717    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:49.128728    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:49.143949    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:49.143959    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:49.155970    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:49.155980    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:51.683558    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:57.377290    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:57.377538    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:57.402217    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:57.402314    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:57.420129    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:57.420211    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:57.433127    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:57.433189    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:57.444096    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:57.444169    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:57.454787    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:57.454852    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:57.465070    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:57.465139    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:57.475604    9661 logs.go:276] 0 containers: []
	W0717 11:09:57.475616    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:57.475668    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:57.491470    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:57.491488    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:57.491493    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:57.506195    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:57.506208    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:57.519576    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:57.519589    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:57.534270    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:57.534284    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:57.551641    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:57.551652    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:57.564395    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:57.564407    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:57.586875    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:57.586885    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:57.599760    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:57.599770    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:57.613700    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:57.613714    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:57.649336    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:57.649352    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:57.688623    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:57.688633    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:57.728243    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:57.728254    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:57.740435    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:57.740447    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:57.754415    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:57.754429    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:57.765598    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:57.765613    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:57.776898    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:57.776907    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:56.685689    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:56.685841    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:56.704907    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:09:56.705010    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:56.721589    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:09:56.721657    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:56.733251    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:09:56.733327    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:56.743369    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:09:56.743441    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:56.753887    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:09:56.753951    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:56.763965    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:09:56.764033    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:56.774133    9411 logs.go:276] 0 containers: []
	W0717 11:09:56.774145    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:56.774204    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:56.784576    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:09:56.784594    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:09:56.784600    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:09:56.797143    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:09:56.797154    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:09:56.812992    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:56.813004    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:56.848526    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:56.848539    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:56.853196    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:09:56.853204    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:09:56.867588    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:09:56.867601    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:09:56.878915    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:09:56.878929    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:09:56.891559    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:09:56.891569    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:56.903399    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:56.903414    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:56.941999    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:09:56.942008    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:09:56.954381    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:09:56.954391    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:09:56.968773    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:09:56.968786    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:09:56.985712    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:09:56.985724    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:09:56.997788    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:56.997800    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:57.022278    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:09:57.022291    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:00.281899    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:59.538219    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:05.284162    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:05.284323    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:05.305628    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:10:05.305722    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:05.320896    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:10:05.320986    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:05.333695    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:10:05.333763    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:05.345332    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:10:05.345412    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:05.355835    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:10:05.355908    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:05.367090    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:10:05.367152    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:05.377299    9661 logs.go:276] 0 containers: []
	W0717 11:10:05.377310    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:05.377361    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:05.387881    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:10:05.387899    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:10:05.387904    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:10:05.404270    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:10:05.404282    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:10:05.422052    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:10:05.422062    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:10:05.433754    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:10:05.433764    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:10:05.471810    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:10:05.471824    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:10:05.487016    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:10:05.487030    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:10:05.501058    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:10:05.501068    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:10:05.512737    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:05.512746    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:05.535813    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:10:05.535820    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:05.547134    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:05.547143    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:05.551796    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:05.551805    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:05.586674    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:10:05.586689    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:10:05.605212    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:10:05.605226    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:10:05.638574    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:10:05.638585    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:10:05.652366    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:05.652379    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:05.691597    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:10:05.691608    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:10:08.207261    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:04.540474    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:04.540640    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:04.555634    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:04.555718    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:04.567781    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:04.567844    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:04.579042    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:04.579103    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:04.589543    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:04.589613    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:04.599982    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:04.600046    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:04.610917    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:04.610981    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:04.621935    9411 logs.go:276] 0 containers: []
	W0717 11:10:04.621946    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:04.622005    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:04.632533    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:04.632555    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:04.632560    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:04.644234    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:04.644245    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:04.656576    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:04.656587    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:04.676375    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:04.676386    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:04.692389    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:04.692398    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:04.718586    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:04.718599    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:04.733817    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:04.733828    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:04.748816    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:04.748827    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:04.788845    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:04.788857    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:04.800993    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:04.801005    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:04.818627    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:04.818636    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:04.823081    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:04.823086    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:04.834391    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:04.834401    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:04.846390    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:04.846406    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:04.858282    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:04.858292    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:07.396749    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:13.209592    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:13.209707    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:13.224773    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:10:13.224848    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:13.238477    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:10:13.238546    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:13.249004    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:10:13.249091    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:13.259248    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:10:13.259326    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:13.269332    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:10:13.269398    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:13.280174    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:10:13.280246    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:13.290599    9661 logs.go:276] 0 containers: []
	W0717 11:10:13.290614    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:13.290669    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:13.300633    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:10:13.300651    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:13.300656    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:13.339690    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:10:13.339700    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:10:13.377631    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:13.377645    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:13.381673    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:10:13.381682    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:10:13.396135    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:10:13.396147    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:10:13.413099    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:10:13.413111    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:10:13.424585    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:10:13.424598    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:10:13.436060    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:10:13.436072    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:13.449005    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:13.449016    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:13.482730    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:10:13.482742    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:10:12.399446    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:12.399965    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:12.435735    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:12.435873    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:12.456118    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:12.456207    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:12.471212    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:12.471294    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:12.483570    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:12.483638    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:12.494065    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:12.494135    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:12.504806    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:12.504874    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:12.517748    9411 logs.go:276] 0 containers: []
	W0717 11:10:12.517760    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:12.517812    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:12.528451    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:12.528471    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:12.528477    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:12.544628    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:12.544639    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:12.556484    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:12.556494    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:12.570751    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:12.570760    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:12.585015    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:12.585025    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:12.606271    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:12.606286    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:12.620056    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:12.620065    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:12.655866    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:12.655878    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:12.673410    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:12.673420    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:12.685568    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:12.685579    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:12.710249    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:12.710257    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:12.721742    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:12.721753    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:12.726239    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:12.726246    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:12.747116    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:12.747128    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:12.765860    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:12.765870    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:13.496417    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:13.496426    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:13.518124    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:10:13.518134    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:10:13.532519    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:10:13.532532    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:10:13.545148    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:10:13.545159    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:10:13.561075    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:10:13.561086    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:10:13.577985    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:10:13.577995    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:10:16.094355    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:15.305221    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:21.096551    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:21.096623    9661 kubeadm.go:597] duration metric: took 4m3.533320834s to restartPrimaryControlPlane
	W0717 11:10:21.096685    9661 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 11:10:21.096714    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 11:10:22.088271    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 11:10:22.093386    9661 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:10:22.096294    9661 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:10:22.099146    9661 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:10:22.099154    9661 kubeadm.go:157] found existing configuration files:
	
	I0717 11:10:22.099178    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/admin.conf
	I0717 11:10:22.102076    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:10:22.102095    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:10:22.104747    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/kubelet.conf
	I0717 11:10:22.107521    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:10:22.107546    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:10:22.110538    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/controller-manager.conf
	I0717 11:10:22.113582    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:10:22.113605    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:10:22.116292    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/scheduler.conf
	I0717 11:10:22.119085    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:10:22.119112    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 11:10:22.122130    9661 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 11:10:22.138176    9661 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0717 11:10:22.138205    9661 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 11:10:22.186956    9661 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 11:10:22.187020    9661 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 11:10:22.187079    9661 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 11:10:22.236757    9661 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 11:10:22.240928    9661 out.go:204]   - Generating certificates and keys ...
	I0717 11:10:22.240966    9661 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 11:10:22.241000    9661 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 11:10:22.241042    9661 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 11:10:22.241078    9661 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 11:10:22.241111    9661 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 11:10:22.241138    9661 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 11:10:22.241165    9661 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 11:10:22.241196    9661 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 11:10:22.241230    9661 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 11:10:22.241281    9661 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 11:10:22.241300    9661 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 11:10:22.241327    9661 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 11:10:22.339086    9661 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 11:10:22.473057    9661 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 11:10:22.521129    9661 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 11:10:22.562231    9661 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 11:10:22.591568    9661 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 11:10:22.592063    9661 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 11:10:22.592093    9661 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 11:10:22.677271    9661 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 11:10:22.681443    9661 out.go:204]   - Booting up control plane ...
	I0717 11:10:22.681522    9661 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 11:10:22.681670    9661 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 11:10:22.681767    9661 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 11:10:22.681872    9661 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 11:10:22.681997    9661 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 11:10:20.307477    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:20.307636    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:20.320889    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:20.320967    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:20.331949    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:20.332014    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:20.342225    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:20.342294    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:20.352747    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:20.352816    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:20.364425    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:20.364486    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:20.374563    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:20.374626    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:20.384900    9411 logs.go:276] 0 containers: []
	W0717 11:10:20.384913    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:20.384965    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:20.395325    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:20.395343    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:20.395350    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:20.431439    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:20.431453    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:20.446648    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:20.446659    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:20.470697    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:20.470703    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:20.475003    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:20.475011    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:20.489080    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:20.489091    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:20.507300    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:20.507314    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:20.524630    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:20.524640    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:20.536004    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:20.536017    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:20.550850    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:20.550863    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:20.562483    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:20.562492    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:20.575138    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:20.575151    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:20.587082    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:20.587092    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:20.624204    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:20.624214    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:20.636202    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:20.636213    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:23.150533    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:26.683261    9661 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001862 seconds
	I0717 11:10:26.683337    9661 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 11:10:26.687225    9661 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 11:10:27.200479    9661 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 11:10:27.200722    9661 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-018000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 11:10:27.703982    9661 kubeadm.go:310] [bootstrap-token] Using token: cpnl27.6prg557gnbcwpr9w
	I0717 11:10:27.707423    9661 out.go:204]   - Configuring RBAC rules ...
	I0717 11:10:27.707479    9661 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 11:10:27.707526    9661 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 11:10:27.710730    9661 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 11:10:27.711718    9661 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 11:10:27.712615    9661 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 11:10:27.713392    9661 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 11:10:27.716700    9661 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 11:10:27.891333    9661 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 11:10:28.111069    9661 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 11:10:28.111113    9661 kubeadm.go:310] 
	I0717 11:10:28.111277    9661 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 11:10:28.111286    9661 kubeadm.go:310] 
	I0717 11:10:28.111345    9661 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 11:10:28.111352    9661 kubeadm.go:310] 
	I0717 11:10:28.111364    9661 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 11:10:28.111394    9661 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 11:10:28.111425    9661 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 11:10:28.111432    9661 kubeadm.go:310] 
	I0717 11:10:28.111520    9661 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 11:10:28.111525    9661 kubeadm.go:310] 
	I0717 11:10:28.111552    9661 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 11:10:28.111555    9661 kubeadm.go:310] 
	I0717 11:10:28.111654    9661 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 11:10:28.111736    9661 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 11:10:28.111847    9661 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 11:10:28.111889    9661 kubeadm.go:310] 
	I0717 11:10:28.112016    9661 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 11:10:28.112076    9661 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 11:10:28.112080    9661 kubeadm.go:310] 
	I0717 11:10:28.112149    9661 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cpnl27.6prg557gnbcwpr9w \
	I0717 11:10:28.112219    9661 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:41cf485c9ce3ed322472481a1cde965b121021497c454b4b9fd17940d4869b14 \
	I0717 11:10:28.112246    9661 kubeadm.go:310] 	--control-plane 
	I0717 11:10:28.112251    9661 kubeadm.go:310] 
	I0717 11:10:28.112303    9661 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 11:10:28.112309    9661 kubeadm.go:310] 
	I0717 11:10:28.112350    9661 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cpnl27.6prg557gnbcwpr9w \
	I0717 11:10:28.112406    9661 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:41cf485c9ce3ed322472481a1cde965b121021497c454b4b9fd17940d4869b14 
	I0717 11:10:28.112495    9661 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 11:10:28.112565    9661 cni.go:84] Creating CNI manager for ""
	I0717 11:10:28.112574    9661 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:10:28.116594    9661 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 11:10:28.123672    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 11:10:28.126533    9661 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 11:10:28.131565    9661 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 11:10:28.131624    9661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-018000 minikube.k8s.io/updated_at=2024_07_17T11_10_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=stopped-upgrade-018000 minikube.k8s.io/primary=true
	I0717 11:10:28.131624    9661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 11:10:28.161452    9661 kubeadm.go:1113] duration metric: took 29.86525ms to wait for elevateKubeSystemPrivileges
	I0717 11:10:28.173572    9661 ops.go:34] apiserver oom_adj: -16
	I0717 11:10:28.173724    9661 kubeadm.go:394] duration metric: took 4m10.624153167s to StartCluster
	I0717 11:10:28.173741    9661 settings.go:142] acquiring lock: {Name:mk52ddc32cf249ba715452a288aa286713554b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:28.173835    9661 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:10:28.174241    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/kubeconfig: {Name:mk327624617af42cf4cf2f31d3ffb3402af5684d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:28.174469    9661 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:10:28.174559    9661 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:10:28.174495    9661 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 11:10:28.174587    9661 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-018000"
	I0717 11:10:28.174604    9661 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-018000"
	W0717 11:10:28.174608    9661 addons.go:243] addon storage-provisioner should already be in state true
	I0717 11:10:28.174620    9661 host.go:66] Checking if "stopped-upgrade-018000" exists ...
	I0717 11:10:28.174625    9661 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-018000"
	I0717 11:10:28.174641    9661 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-018000"
	I0717 11:10:28.175135    9661 retry.go:31] will retry after 985.009376ms: connect: dial unix /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/monitor: connect: connection refused
	I0717 11:10:28.175888    9661 kapi.go:59] client config for stopped-upgrade-018000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c47730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:10:28.176028    9661 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-018000"
	W0717 11:10:28.176033    9661 addons.go:243] addon default-storageclass should already be in state true
	I0717 11:10:28.176041    9661 host.go:66] Checking if "stopped-upgrade-018000" exists ...
	I0717 11:10:28.176666    9661 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 11:10:28.176671    9661 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 11:10:28.176677    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:10:28.178691    9661 out.go:177] * Verifying Kubernetes components...
	I0717 11:10:28.185626    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:28.287226    9661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:10:28.292976    9661 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:10:28.293027    9661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:28.297209    9661 api_server.go:72] duration metric: took 122.726709ms to wait for apiserver process to appear ...
	I0717 11:10:28.297219    9661 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:10:28.297228    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:28.341591    9661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 11:10:28.152801    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:28.152899    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:28.164505    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:28.164580    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:28.176520    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:28.176572    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:28.190995    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:28.191065    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:28.202384    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:28.202446    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:28.213199    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:28.213272    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:28.223702    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:28.223769    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:28.234438    9411 logs.go:276] 0 containers: []
	W0717 11:10:28.234450    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:28.234508    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:28.246288    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:28.246304    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:28.246308    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:28.284780    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:28.284799    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:28.321202    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:28.321217    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:28.333822    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:28.333833    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:28.338654    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:28.338667    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:28.355618    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:28.355628    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:28.368226    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:28.368236    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:28.387206    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:28.387218    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:28.412869    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:28.412887    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:28.429479    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:28.429495    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:28.445436    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:28.445447    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:28.458165    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:28.458180    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:28.477436    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:28.477449    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:28.490043    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:28.490055    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:28.501697    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:28.501709    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:29.167018    9661 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:29.171080    9661 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:10:29.171088    9661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 11:10:29.171096    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:10:29.209195    9661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:10:33.298088    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:33.298133    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:31.017681    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:38.298473    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:38.298498    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:36.019947    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:36.020111    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:36.037274    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:36.037347    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:36.052206    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:36.052303    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:36.074722    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:36.074787    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:36.085921    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:36.085987    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:36.096842    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:36.096915    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:36.107399    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:36.107466    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:36.117134    9411 logs.go:276] 0 containers: []
	W0717 11:10:36.117147    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:36.117199    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:36.127499    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:36.127521    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:36.127527    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:36.163873    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:36.163884    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:36.178428    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:36.178440    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:36.190741    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:36.190753    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:36.202899    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:36.202908    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:36.207385    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:36.207394    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:36.221506    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:36.221516    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:36.247093    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:36.247103    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:36.285558    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:36.285570    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:36.297747    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:36.297758    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:36.309851    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:36.309862    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:36.322209    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:36.322220    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:36.340962    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:36.340973    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:36.352783    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:36.352795    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:36.364738    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:36.364751    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:43.299222    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:43.299299    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:38.885511    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:48.299940    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:48.300018    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:43.887209    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:43.887306    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:43.898291    9411 logs.go:276] 1 containers: [fa3b51eefd92]
	I0717 11:10:43.898376    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:43.910061    9411 logs.go:276] 1 containers: [2af82a0b000a]
	I0717 11:10:43.910119    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:43.921080    9411 logs.go:276] 4 containers: [c93f1c12f933 8c40fe8a19ff a81614bf5b1d 54f9d88ef059]
	I0717 11:10:43.921139    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:43.939048    9411 logs.go:276] 1 containers: [59267f3dbded]
	I0717 11:10:43.939105    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:43.949237    9411 logs.go:276] 1 containers: [3aa34d7f7da3]
	I0717 11:10:43.949301    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:43.964969    9411 logs.go:276] 1 containers: [abf2dd75024e]
	I0717 11:10:43.965039    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:43.975755    9411 logs.go:276] 0 containers: []
	W0717 11:10:43.975768    9411 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:43.975833    9411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:43.988331    9411 logs.go:276] 1 containers: [2946e46e01e8]
	I0717 11:10:43.988350    9411 logs.go:123] Gathering logs for coredns [a81614bf5b1d] ...
	I0717 11:10:43.988356    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a81614bf5b1d"
	I0717 11:10:44.000735    9411 logs.go:123] Gathering logs for kube-apiserver [fa3b51eefd92] ...
	I0717 11:10:44.000745    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa3b51eefd92"
	I0717 11:10:44.014609    9411 logs.go:123] Gathering logs for coredns [8c40fe8a19ff] ...
	I0717 11:10:44.014623    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40fe8a19ff"
	I0717 11:10:44.025844    9411 logs.go:123] Gathering logs for storage-provisioner [2946e46e01e8] ...
	I0717 11:10:44.025855    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2946e46e01e8"
	I0717 11:10:44.037747    9411 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:44.037759    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:44.074995    9411 logs.go:123] Gathering logs for coredns [54f9d88ef059] ...
	I0717 11:10:44.075010    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54f9d88ef059"
	I0717 11:10:44.087024    9411 logs.go:123] Gathering logs for coredns [c93f1c12f933] ...
	I0717 11:10:44.087036    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93f1c12f933"
	I0717 11:10:44.102718    9411 logs.go:123] Gathering logs for container status ...
	I0717 11:10:44.102732    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:44.114811    9411 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:44.114826    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:44.152088    9411 logs.go:123] Gathering logs for etcd [2af82a0b000a] ...
	I0717 11:10:44.152102    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2af82a0b000a"
	I0717 11:10:44.166260    9411 logs.go:123] Gathering logs for kube-proxy [3aa34d7f7da3] ...
	I0717 11:10:44.166274    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa34d7f7da3"
	I0717 11:10:44.179526    9411 logs.go:123] Gathering logs for kube-controller-manager [abf2dd75024e] ...
	I0717 11:10:44.179539    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abf2dd75024e"
	I0717 11:10:44.198043    9411 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:44.198057    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:44.220817    9411 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:44.220825    9411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:44.225734    9411 logs.go:123] Gathering logs for kube-scheduler [59267f3dbded] ...
	I0717 11:10:44.225742    9411 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59267f3dbded"
	I0717 11:10:46.743740    9411 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:51.745998    9411 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:51.750576    9411 out.go:177] 
	W0717 11:10:51.754534    9411 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0717 11:10:51.754546    9411 out.go:239] * 
	W0717 11:10:51.755392    9411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:10:51.765452    9411 out.go:177] 
	I0717 11:10:53.300490    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:53.300525    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:58.301227    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:58.301266    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:10:58.679425    9661 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0717 11:10:58.683805    9661 out.go:177] * Enabled addons: storage-provisioner
	I0717 11:10:58.689716    9661 addons.go:510] duration metric: took 30.5154425s for enable addons: enabled=[storage-provisioner]
	I0717 11:11:03.302078    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:03.302120    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-07-17 18:02:01 UTC, ends at Wed 2024-07-17 18:11:07 UTC. --
	Jul 17 18:10:53 running-upgrade-462000 dockerd[3218]: time="2024-07-17T18:10:53.315540520Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cf48eedc6b05b1c054ac0734a9f00297bbe18b7450c3c071cd98c5df5048f6ef pid=18834 runtime=io.containerd.runc.v2
	Jul 17 18:10:53 running-upgrade-462000 dockerd[3218]: time="2024-07-17T18:10:53.317490966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 18:10:53 running-upgrade-462000 dockerd[3218]: time="2024-07-17T18:10:53.317619251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 18:10:53 running-upgrade-462000 dockerd[3218]: time="2024-07-17T18:10:53.317663416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 18:10:53 running-upgrade-462000 dockerd[3218]: time="2024-07-17T18:10:53.317738079Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ffade0d10b52d0c8a291eb74db33c9cf9917ff2a1dfbd78ab8598a7e12197c75 pid=18843 runtime=io.containerd.runc.v2
	Jul 17 18:10:54 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:54Z" level=error msg="ContainerStats resp: {0x4000115700 linux}"
	Jul 17 18:10:54 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:54Z" level=error msg="ContainerStats resp: {0x4000535800 linux}"
	Jul 17 18:10:54 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:54Z" level=error msg="ContainerStats resp: {0x4000528140 linux}"
	Jul 17 18:10:54 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:54Z" level=error msg="ContainerStats resp: {0x4000528800 linux}"
	Jul 17 18:10:54 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:54Z" level=error msg="ContainerStats resp: {0x40000b3240 linux}"
	Jul 17 18:10:54 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:54Z" level=error msg="ContainerStats resp: {0x40009d81c0 linux}"
	Jul 17 18:10:54 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:54Z" level=error msg="ContainerStats resp: {0x40009d8040 linux}"
	Jul 17 18:10:54 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:54Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 17 18:10:59 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:10:59Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 17 18:11:04 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 17 18:11:04 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:04Z" level=error msg="ContainerStats resp: {0x40008d4d80 linux}"
	Jul 17 18:11:04 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:04Z" level=error msg="ContainerStats resp: {0x40008d5700 linux}"
	Jul 17 18:11:05 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:05Z" level=error msg="ContainerStats resp: {0x4000929500 linux}"
	Jul 17 18:11:06 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:06Z" level=error msg="ContainerStats resp: {0x40008d0580 linux}"
	Jul 17 18:11:06 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:06Z" level=error msg="ContainerStats resp: {0x40008d0980 linux}"
	Jul 17 18:11:06 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:06Z" level=error msg="ContainerStats resp: {0x40008d0ec0 linux}"
	Jul 17 18:11:06 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:06Z" level=error msg="ContainerStats resp: {0x4000959980 linux}"
	Jul 17 18:11:06 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:06Z" level=error msg="ContainerStats resp: {0x40008d1740 linux}"
	Jul 17 18:11:06 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:06Z" level=error msg="ContainerStats resp: {0x40008d1b80 linux}"
	Jul 17 18:11:06 running-upgrade-462000 cri-dockerd[3060]: time="2024-07-17T18:11:06Z" level=error msg="ContainerStats resp: {0x400039b440 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ffade0d10b52d       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   d179eeaeaa78b
	cf48eedc6b05b       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   170cf4751b996
	c93f1c12f9338       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   d179eeaeaa78b
	8c40fe8a19ff3       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   170cf4751b996
	2946e46e01e85       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   5d369d0be8eeb
	3aa34d7f7da30       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   61e9e9c0d20b3
	fa3b51eefd922       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   375f7eab60535
	abf2dd75024e1       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   20451e9eb9e5e
	2af82a0b000a6       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   43e963322531d
	59267f3dbdeda       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   8b30b87b1410c
	
	
	==> coredns [8c40fe8a19ff] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:46144->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:37258->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:32845->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:53251->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:42384->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:41959->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:58795->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:57502->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:54787->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6613157187916844976.6837333185007257012. HINFO: read udp 10.244.0.3:38662->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c93f1c12f933] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:36632->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:33131->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:48944->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:52970->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:55607->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:45478->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:45030->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:33650->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:34282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6482716261322285450.8724758307885369998. HINFO: read udp 10.244.0.2:51318->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cf48eedc6b05] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4909424430910515342.7039085136703303623. HINFO: read udp 10.244.0.3:48333->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4909424430910515342.7039085136703303623. HINFO: read udp 10.244.0.3:39961->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ffade0d10b52] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5773986179542698237.4717529744256716694. HINFO: read udp 10.244.0.2:38798->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5773986179542698237.4717529744256716694. HINFO: read udp 10.244.0.2:60117->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-462000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-462000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=running-upgrade-462000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T11_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:06:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-462000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:11:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:06:50 +0000   Wed, 17 Jul 2024 18:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:06:50 +0000   Wed, 17 Jul 2024 18:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:06:50 +0000   Wed, 17 Jul 2024 18:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:06:50 +0000   Wed, 17 Jul 2024 18:06:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-462000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 1079521788fa4655b357cb36ffc25467
	  System UUID:                1079521788fa4655b357cb36ffc25467
	  Boot ID:                    8f83ac6a-de36-4fc7-ac6f-4d2bac90465a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5c6bp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-bdq45                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-462000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-462000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-462000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-qfds8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-462000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m22s)  kubelet          Node running-upgrade-462000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-462000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-462000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-462000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-462000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-462000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-462000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-462000 event: Registered Node running-upgrade-462000 in Controller
	
	
	==> dmesg <==
	[  +1.655278] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.061577] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.084383] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.136547] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085948] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.083345] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.797967] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[ +10.162966] systemd-fstab-generator[1932]: Ignoring "noauto" for root device
	[  +2.803268] systemd-fstab-generator[2209]: Ignoring "noauto" for root device
	[  +0.138610] systemd-fstab-generator[2242]: Ignoring "noauto" for root device
	[  +0.089175] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[  +0.100753] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +2.431619] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.185654] systemd-fstab-generator[3014]: Ignoring "noauto" for root device
	[  +0.082181] systemd-fstab-generator[3028]: Ignoring "noauto" for root device
	[  +0.088315] systemd-fstab-generator[3039]: Ignoring "noauto" for root device
	[  +0.071377] systemd-fstab-generator[3053]: Ignoring "noauto" for root device
	[  +2.176860] systemd-fstab-generator[3205]: Ignoring "noauto" for root device
	[  +3.669459] systemd-fstab-generator[3579]: Ignoring "noauto" for root device
	[  +0.987126] systemd-fstab-generator[3861]: Ignoring "noauto" for root device
	[ +18.514651] kauditd_printk_skb: 68 callbacks suppressed
	[Jul17 18:06] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.283804] systemd-fstab-generator[11915]: Ignoring "noauto" for root device
	[  +5.620733] systemd-fstab-generator[12521]: Ignoring "noauto" for root device
	[  +0.473066] systemd-fstab-generator[12653]: Ignoring "noauto" for root device
	
	
	==> etcd [2af82a0b000a] <==
	{"level":"info","ts":"2024-07-17T18:06:46.022Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:06:46.022Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:06:46.022Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-17T18:06:46.022Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-17T18:06:46.022Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-17T18:06:46.022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-17T18:06:46.022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-17T18:06:46.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T18:06:46.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:06:46.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-17T18:06:46.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:06:46.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-17T18:06:46.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:06:46.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-17T18:06:46.512Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:06:46.525Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:06:46.525Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:06:46.525Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:06:46.525Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-462000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:06:46.525Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:06:46.525Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:06:46.526Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-17T18:06:46.532Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T18:06:46.536Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:06:46.536Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:11:07 up 9 min,  0 users,  load average: 0.27, 0.42, 0.25
	Linux running-upgrade-462000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fa3b51eefd92] <==
	I0717 18:06:47.867112       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0717 18:06:47.881366       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0717 18:06:47.881444       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:06:47.881458       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:06:47.881463       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:06:47.882492       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0717 18:06:47.884284       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0717 18:06:48.617752       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 18:06:48.786131       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 18:06:48.788135       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 18:06:48.788255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:06:48.937188       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:06:48.947417       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:06:49.042260       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 18:06:49.044675       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0717 18:06:49.045051       1 controller.go:611] quota admission added evaluator for: endpoints
	I0717 18:06:49.046381       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 18:06:49.914580       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0717 18:06:50.582948       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0717 18:06:50.598668       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 18:06:50.608268       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0717 18:06:50.648396       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:07:03.170932       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0717 18:07:03.219705       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0717 18:07:04.315563       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [abf2dd75024e] <==
	W0717 18:07:03.230243       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-462000. Assuming now as a timestamp.
	I0717 18:07:03.230299       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0717 18:07:03.230356       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0717 18:07:03.230525       1 event.go:294] "Event occurred" object="running-upgrade-462000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-462000 event: Registered Node running-upgrade-462000 in Controller"
	I0717 18:07:03.234247       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0717 18:07:03.235770       1 shared_informer.go:262] Caches are synced for namespace
	I0717 18:07:03.235866       1 range_allocator.go:374] Set node running-upgrade-462000 PodCIDR to [10.244.0.0/24]
	I0717 18:07:03.236331       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5c6bp"
	I0717 18:07:03.239366       1 shared_informer.go:262] Caches are synced for stateful set
	I0717 18:07:03.240637       1 shared_informer.go:262] Caches are synced for TTL
	I0717 18:07:03.242358       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0717 18:07:03.263807       1 shared_informer.go:262] Caches are synced for crt configmap
	I0717 18:07:03.314151       1 shared_informer.go:262] Caches are synced for expand
	I0717 18:07:03.332450       1 shared_informer.go:262] Caches are synced for attach detach
	I0717 18:07:03.336686       1 shared_informer.go:262] Caches are synced for PV protection
	I0717 18:07:03.342544       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0717 18:07:03.364206       1 shared_informer.go:262] Caches are synced for persistent volume
	I0717 18:07:03.438749       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0717 18:07:03.442054       1 shared_informer.go:262] Caches are synced for disruption
	I0717 18:07:03.442088       1 disruption.go:371] Sending events to api server.
	I0717 18:07:03.444197       1 shared_informer.go:262] Caches are synced for resource quota
	I0717 18:07:03.472568       1 shared_informer.go:262] Caches are synced for resource quota
	I0717 18:07:03.858032       1 shared_informer.go:262] Caches are synced for garbage collector
	I0717 18:07:03.914438       1 shared_informer.go:262] Caches are synced for garbage collector
	I0717 18:07:03.914451       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [3aa34d7f7da3] <==
	I0717 18:07:04.291365       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0717 18:07:04.291391       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0717 18:07:04.291402       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0717 18:07:04.313147       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0717 18:07:04.313159       1 server_others.go:206] "Using iptables Proxier"
	I0717 18:07:04.313181       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0717 18:07:04.313287       1 server.go:661] "Version info" version="v1.24.1"
	I0717 18:07:04.313291       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:07:04.313530       1 config.go:317] "Starting service config controller"
	I0717 18:07:04.313537       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0717 18:07:04.313550       1 config.go:226] "Starting endpoint slice config controller"
	I0717 18:07:04.313552       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0717 18:07:04.314604       1 config.go:444] "Starting node config controller"
	I0717 18:07:04.314610       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0717 18:07:04.414312       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0717 18:07:04.414320       1 shared_informer.go:262] Caches are synced for service config
	I0717 18:07:04.414629       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [59267f3dbded] <==
	W0717 18:06:47.833578       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:06:47.833598       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:06:47.833661       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:06:47.833685       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:06:47.833732       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:06:47.833848       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:06:47.833777       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:06:47.833880       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:06:47.833794       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:06:47.833932       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:06:47.833805       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:06:47.833986       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:06:47.833748       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:06:47.834017       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:06:48.782955       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:06:48.783026       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:06:48.783167       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:06:48.783330       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:06:48.800691       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:06:48.800728       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:06:48.839002       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:06:48.839020       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:06:48.851623       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:06:48.851701       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0717 18:06:51.528112       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-07-17 18:02:01 UTC, ends at Wed 2024-07-17 18:11:08 UTC. --
	Jul 17 18:06:52 running-upgrade-462000 kubelet[12527]: E0717 18:06:52.820728   12527 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-462000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-462000"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.176273   12527 topology_manager.go:200] "Topology Admit Handler"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.243488   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwjjz\" (UniqueName: \"kubernetes.io/projected/ddb02fc1-b72f-43ce-9436-b39b2ab6737f-kube-api-access-wwjjz\") pod \"kube-proxy-qfds8\" (UID: \"ddb02fc1-b72f-43ce-9436-b39b2ab6737f\") " pod="kube-system/kube-proxy-qfds8"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.243512   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddb02fc1-b72f-43ce-9436-b39b2ab6737f-kube-proxy\") pod \"kube-proxy-qfds8\" (UID: \"ddb02fc1-b72f-43ce-9436-b39b2ab6737f\") " pod="kube-system/kube-proxy-qfds8"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.243521   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddb02fc1-b72f-43ce-9436-b39b2ab6737f-lib-modules\") pod \"kube-proxy-qfds8\" (UID: \"ddb02fc1-b72f-43ce-9436-b39b2ab6737f\") " pod="kube-system/kube-proxy-qfds8"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.243533   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddb02fc1-b72f-43ce-9436-b39b2ab6737f-xtables-lock\") pod \"kube-proxy-qfds8\" (UID: \"ddb02fc1-b72f-43ce-9436-b39b2ab6737f\") " pod="kube-system/kube-proxy-qfds8"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.243303   12527 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.243895   12527 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.260038   12527 topology_manager.go:200] "Topology Admit Handler"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: E0717 18:07:03.381351   12527 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: E0717 18:07:03.381371   12527 projected.go:192] Error preparing data for projected volume kube-api-access-wwjjz for pod kube-system/kube-proxy-qfds8: configmap "kube-root-ca.crt" not found
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: E0717 18:07:03.381405   12527 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/ddb02fc1-b72f-43ce-9436-b39b2ab6737f-kube-api-access-wwjjz podName:ddb02fc1-b72f-43ce-9436-b39b2ab6737f nodeName:}" failed. No retries permitted until 2024-07-17 18:07:03.881393684 +0000 UTC m=+13.309133077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wwjjz" (UniqueName: "kubernetes.io/projected/ddb02fc1-b72f-43ce-9436-b39b2ab6737f-kube-api-access-wwjjz") pod "kube-proxy-qfds8" (UID: "ddb02fc1-b72f-43ce-9436-b39b2ab6737f") : configmap "kube-root-ca.crt" not found
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.444797   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c39140fe-ea76-4fd6-8b37-e3ac28eb14a1-tmp\") pod \"storage-provisioner\" (UID: \"c39140fe-ea76-4fd6-8b37-e3ac28eb14a1\") " pod="kube-system/storage-provisioner"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: I0717 18:07:03.444823   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bqn\" (UniqueName: \"kubernetes.io/projected/c39140fe-ea76-4fd6-8b37-e3ac28eb14a1-kube-api-access-s8bqn\") pod \"storage-provisioner\" (UID: \"c39140fe-ea76-4fd6-8b37-e3ac28eb14a1\") " pod="kube-system/storage-provisioner"
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: E0717 18:07:03.581392   12527 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: E0717 18:07:03.581412   12527 projected.go:192] Error preparing data for projected volume kube-api-access-s8bqn for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 17 18:07:03 running-upgrade-462000 kubelet[12527]: E0717 18:07:03.581445   12527 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/c39140fe-ea76-4fd6-8b37-e3ac28eb14a1-kube-api-access-s8bqn podName:c39140fe-ea76-4fd6-8b37-e3ac28eb14a1 nodeName:}" failed. No retries permitted until 2024-07-17 18:07:04.081433018 +0000 UTC m=+13.509172411 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s8bqn" (UniqueName: "kubernetes.io/projected/c39140fe-ea76-4fd6-8b37-e3ac28eb14a1-kube-api-access-s8bqn") pod "storage-provisioner" (UID: "c39140fe-ea76-4fd6-8b37-e3ac28eb14a1") : configmap "kube-root-ca.crt" not found
	Jul 17 18:07:05 running-upgrade-462000 kubelet[12527]: I0717 18:07:05.151412   12527 topology_manager.go:200] "Topology Admit Handler"
	Jul 17 18:07:05 running-upgrade-462000 kubelet[12527]: I0717 18:07:05.155023   12527 topology_manager.go:200] "Topology Admit Handler"
	Jul 17 18:07:05 running-upgrade-462000 kubelet[12527]: I0717 18:07:05.255700   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-929qm\" (UniqueName: \"kubernetes.io/projected/c24210e6-15c2-4790-a246-d96ac016b701-kube-api-access-929qm\") pod \"coredns-6d4b75cb6d-5c6bp\" (UID: \"c24210e6-15c2-4790-a246-d96ac016b701\") " pod="kube-system/coredns-6d4b75cb6d-5c6bp"
	Jul 17 18:07:05 running-upgrade-462000 kubelet[12527]: I0717 18:07:05.255841   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c24210e6-15c2-4790-a246-d96ac016b701-config-volume\") pod \"coredns-6d4b75cb6d-5c6bp\" (UID: \"c24210e6-15c2-4790-a246-d96ac016b701\") " pod="kube-system/coredns-6d4b75cb6d-5c6bp"
	Jul 17 18:07:05 running-upgrade-462000 kubelet[12527]: I0717 18:07:05.356782   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szc95\" (UniqueName: \"kubernetes.io/projected/399eec71-cbab-4910-802b-5534a43768cd-kube-api-access-szc95\") pod \"coredns-6d4b75cb6d-bdq45\" (UID: \"399eec71-cbab-4910-802b-5534a43768cd\") " pod="kube-system/coredns-6d4b75cb6d-bdq45"
	Jul 17 18:07:05 running-upgrade-462000 kubelet[12527]: I0717 18:07:05.356919   12527 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/399eec71-cbab-4910-802b-5534a43768cd-config-volume\") pod \"coredns-6d4b75cb6d-bdq45\" (UID: \"399eec71-cbab-4910-802b-5534a43768cd\") " pod="kube-system/coredns-6d4b75cb6d-bdq45"
	Jul 17 18:10:53 running-upgrade-462000 kubelet[12527]: I0717 18:10:53.982361   12527 scope.go:110] "RemoveContainer" containerID="a81614bf5b1d11aa854f16cc948d26e0b51ead5a64eff51093926aa64d850870"
	Jul 17 18:10:54 running-upgrade-462000 kubelet[12527]: I0717 18:10:54.002951   12527 scope.go:110] "RemoveContainer" containerID="54f9d88ef05906e22f27e08e0936e2a4260f612ecd5f6f204933c7821372e71b"
	
	
	==> storage-provisioner [2946e46e01e8] <==
	I0717 18:07:04.368170       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:07:04.372583       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:07:04.372604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:07:04.375443       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:07:04.375536       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-462000_2b6066e5-a616-460d-b620-e7553c4faba5!
	I0717 18:07:04.376080       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a564f29a-26c6-429f-9cc0-92923a4559c0", APIVersion:"v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-462000_2b6066e5-a616-460d-b620-e7553c4faba5 became leader
	I0717 18:07:04.477124       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-462000_2b6066e5-a616-460d-b620-e7553c4faba5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-462000 -n running-upgrade-462000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-462000 -n running-upgrade-462000: exit status 2 (15.661052333s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-462000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-462000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-462000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-462000: (1.108433334s)
--- FAIL: TestRunningBinaryUpgrade (588.48s)

TestKubernetesUpgrade (17.05s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-212000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-212000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.706724375s)

-- stdout --
	* [kubernetes-upgrade-212000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-212000" primary control-plane node in "kubernetes-upgrade-212000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-212000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:04:36.676927    9549 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:04:36.677095    9549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:36.677102    9549 out.go:304] Setting ErrFile to fd 2...
	I0717 11:04:36.677105    9549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:36.677223    9549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:04:36.678514    9549 out.go:298] Setting JSON to false
	I0717 11:04:36.696667    9549 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5644,"bootTime":1721233832,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:04:36.696788    9549 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:04:36.702171    9549 out.go:177] * [kubernetes-upgrade-212000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:04:36.709171    9549 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:04:36.709272    9549 notify.go:220] Checking for updates...
	I0717 11:04:36.716166    9549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:04:36.719168    9549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:04:36.722159    9549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:04:36.725176    9549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:04:36.728181    9549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:04:36.731697    9549 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:04:36.731765    9549 config.go:182] Loaded profile config "running-upgrade-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:04:36.731830    9549 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:04:36.736164    9549 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:04:36.743127    9549 start.go:297] selected driver: qemu2
	I0717 11:04:36.743134    9549 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:04:36.743141    9549 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:04:36.745359    9549 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:04:36.748064    9549 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:04:36.751218    9549 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 11:04:36.751233    9549 cni.go:84] Creating CNI manager for ""
	I0717 11:04:36.751240    9549 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 11:04:36.751281    9549 start.go:340] cluster config:
	{Name:kubernetes-upgrade-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:04:36.755296    9549 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:36.763141    9549 out.go:177] * Starting "kubernetes-upgrade-212000" primary control-plane node in "kubernetes-upgrade-212000" cluster
	I0717 11:04:36.766086    9549 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 11:04:36.766113    9549 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 11:04:36.766125    9549 cache.go:56] Caching tarball of preloaded images
	I0717 11:04:36.766206    9549 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:04:36.766213    9549 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 11:04:36.766283    9549 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/kubernetes-upgrade-212000/config.json ...
	I0717 11:04:36.766296    9549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/kubernetes-upgrade-212000/config.json: {Name:mk271ee1e5da4c804a34e1c5f3cd3750cef17bc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:04:36.766552    9549 start.go:360] acquireMachinesLock for kubernetes-upgrade-212000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:36.766587    9549 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "kubernetes-upgrade-212000"
	I0717 11:04:36.766597    9549 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:04:36.766634    9549 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:04:36.770185    9549 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:04:36.786357    9549 start.go:159] libmachine.API.Create for "kubernetes-upgrade-212000" (driver="qemu2")
	I0717 11:04:36.786392    9549 client.go:168] LocalClient.Create starting
	I0717 11:04:36.786472    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:04:36.786511    9549 main.go:141] libmachine: Decoding PEM data...
	I0717 11:04:36.786525    9549 main.go:141] libmachine: Parsing certificate...
	I0717 11:04:36.786574    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:04:36.786599    9549 main.go:141] libmachine: Decoding PEM data...
	I0717 11:04:36.786607    9549 main.go:141] libmachine: Parsing certificate...
	I0717 11:04:36.786983    9549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:04:36.919649    9549 main.go:141] libmachine: Creating SSH key...
	I0717 11:04:36.993127    9549 main.go:141] libmachine: Creating Disk image...
	I0717 11:04:36.993140    9549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:04:36.993343    9549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2
	I0717 11:04:37.003146    9549 main.go:141] libmachine: STDOUT: 
	I0717 11:04:37.003170    9549 main.go:141] libmachine: STDERR: 
	I0717 11:04:37.003225    9549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2 +20000M
	I0717 11:04:37.011370    9549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:04:37.011387    9549 main.go:141] libmachine: STDERR: 
	I0717 11:04:37.011403    9549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2
	I0717 11:04:37.011409    9549 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:04:37.011421    9549 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:37.011446    9549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:01:ab:4e:4a:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2
	I0717 11:04:37.013109    9549 main.go:141] libmachine: STDOUT: 
	I0717 11:04:37.013123    9549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:37.013144    9549 client.go:171] duration metric: took 226.749875ms to LocalClient.Create
	I0717 11:04:39.015368    9549 start.go:128] duration metric: took 2.248723292s to createHost
	I0717 11:04:39.015429    9549 start.go:83] releasing machines lock for "kubernetes-upgrade-212000", held for 2.248850583s
	W0717 11:04:39.015475    9549 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:39.025140    9549 out.go:177] * Deleting "kubernetes-upgrade-212000" in qemu2 ...
	W0717 11:04:39.047273    9549 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:39.047300    9549 start.go:729] Will try again in 5 seconds ...
	I0717 11:04:44.049527    9549 start.go:360] acquireMachinesLock for kubernetes-upgrade-212000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:44.050047    9549 start.go:364] duration metric: took 411µs to acquireMachinesLock for "kubernetes-upgrade-212000"
	I0717 11:04:44.050191    9549 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:04:44.050413    9549 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:04:44.059029    9549 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:04:44.105367    9549 start.go:159] libmachine.API.Create for "kubernetes-upgrade-212000" (driver="qemu2")
	I0717 11:04:44.105417    9549 client.go:168] LocalClient.Create starting
	I0717 11:04:44.105546    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:04:44.105608    9549 main.go:141] libmachine: Decoding PEM data...
	I0717 11:04:44.105625    9549 main.go:141] libmachine: Parsing certificate...
	I0717 11:04:44.105690    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:04:44.105736    9549 main.go:141] libmachine: Decoding PEM data...
	I0717 11:04:44.105747    9549 main.go:141] libmachine: Parsing certificate...
	I0717 11:04:44.106288    9549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:04:44.246446    9549 main.go:141] libmachine: Creating SSH key...
	I0717 11:04:44.297458    9549 main.go:141] libmachine: Creating Disk image...
	I0717 11:04:44.297464    9549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:04:44.297627    9549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2
	I0717 11:04:44.307846    9549 main.go:141] libmachine: STDOUT: 
	I0717 11:04:44.307868    9549 main.go:141] libmachine: STDERR: 
	I0717 11:04:44.307931    9549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2 +20000M
	I0717 11:04:44.316396    9549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:04:44.316413    9549 main.go:141] libmachine: STDERR: 
	I0717 11:04:44.316425    9549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2
	I0717 11:04:44.316430    9549 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:04:44.316442    9549 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:44.316463    9549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d4:85:15:08:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2
	I0717 11:04:44.318198    9549 main.go:141] libmachine: STDOUT: 
	I0717 11:04:44.318213    9549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:44.318225    9549 client.go:171] duration metric: took 212.803709ms to LocalClient.Create
	I0717 11:04:46.318375    9549 start.go:128] duration metric: took 2.267952375s to createHost
	I0717 11:04:46.318418    9549 start.go:83] releasing machines lock for "kubernetes-upgrade-212000", held for 2.268368833s
	W0717 11:04:46.318567    9549 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-212000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-212000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:46.327003    9549 out.go:177] 
	W0717 11:04:46.332923    9549 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:04:46.332940    9549 out.go:239] * 
	* 
	W0717 11:04:46.334116    9549 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:04:46.342887    9549 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-212000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-212000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-212000: (1.942834667s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-212000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-212000 status --format={{.Host}}: exit status 7 (49.936958ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-212000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-212000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181715084s)

-- stdout --
	* [kubernetes-upgrade-212000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-212000" primary control-plane node in "kubernetes-upgrade-212000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-212000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-212000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:04:48.377114    9586 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:04:48.377243    9586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:48.377246    9586 out.go:304] Setting ErrFile to fd 2...
	I0717 11:04:48.377249    9586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:04:48.377398    9586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:04:48.378375    9586 out.go:298] Setting JSON to false
	I0717 11:04:48.394458    9586 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5656,"bootTime":1721233832,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:04:48.394540    9586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:04:48.399766    9586 out.go:177] * [kubernetes-upgrade-212000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:04:48.406707    9586 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:04:48.406785    9586 notify.go:220] Checking for updates...
	I0717 11:04:48.412112    9586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:04:48.415653    9586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:04:48.418702    9586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:04:48.421694    9586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:04:48.424687    9586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:04:48.427940    9586 config.go:182] Loaded profile config "kubernetes-upgrade-212000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0717 11:04:48.428201    9586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:04:48.432672    9586 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:04:48.439657    9586 start.go:297] selected driver: qemu2
	I0717 11:04:48.439662    9586 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:04:48.439708    9586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:04:48.442030    9586 cni.go:84] Creating CNI manager for ""
	I0717 11:04:48.442048    9586 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:04:48.442079    9586 start.go:340] cluster config:
	{Name:kubernetes-upgrade-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:04:48.445531    9586 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:04:48.453620    9586 out.go:177] * Starting "kubernetes-upgrade-212000" primary control-plane node in "kubernetes-upgrade-212000" cluster
	I0717 11:04:48.457662    9586 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 11:04:48.457678    9586 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0717 11:04:48.457688    9586 cache.go:56] Caching tarball of preloaded images
	I0717 11:04:48.457742    9586 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:04:48.457747    9586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0717 11:04:48.457795    9586 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/kubernetes-upgrade-212000/config.json ...
	I0717 11:04:48.458140    9586 start.go:360] acquireMachinesLock for kubernetes-upgrade-212000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:48.458169    9586 start.go:364] duration metric: took 23.042µs to acquireMachinesLock for "kubernetes-upgrade-212000"
	I0717 11:04:48.458178    9586 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:04:48.458183    9586 fix.go:54] fixHost starting: 
	I0717 11:04:48.458294    9586 fix.go:112] recreateIfNeeded on kubernetes-upgrade-212000: state=Stopped err=<nil>
	W0717 11:04:48.458302    9586 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:04:48.465678    9586 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-212000" ...
	I0717 11:04:48.468678    9586 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:48.468725    9586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d4:85:15:08:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2
	I0717 11:04:48.470687    9586 main.go:141] libmachine: STDOUT: 
	I0717 11:04:48.470810    9586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:48.470836    9586 fix.go:56] duration metric: took 12.652167ms for fixHost
	I0717 11:04:48.470839    9586 start.go:83] releasing machines lock for "kubernetes-upgrade-212000", held for 12.665834ms
	W0717 11:04:48.470846    9586 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:04:48.470885    9586 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:48.470889    9586 start.go:729] Will try again in 5 seconds ...
	I0717 11:04:53.471298    9586 start.go:360] acquireMachinesLock for kubernetes-upgrade-212000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:04:53.471829    9586 start.go:364] duration metric: took 383.042µs to acquireMachinesLock for "kubernetes-upgrade-212000"
	I0717 11:04:53.471921    9586 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:04:53.471939    9586 fix.go:54] fixHost starting: 
	I0717 11:04:53.472653    9586 fix.go:112] recreateIfNeeded on kubernetes-upgrade-212000: state=Stopped err=<nil>
	W0717 11:04:53.472681    9586 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:04:53.477216    9586 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-212000" ...
	I0717 11:04:53.485236    9586 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:04:53.485459    9586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d4:85:15:08:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubernetes-upgrade-212000/disk.qcow2
	I0717 11:04:53.495273    9586 main.go:141] libmachine: STDOUT: 
	I0717 11:04:53.496024    9586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:04:53.496106    9586 fix.go:56] duration metric: took 24.169417ms for fixHost
	I0717 11:04:53.496121    9586 start.go:83] releasing machines lock for "kubernetes-upgrade-212000", held for 24.268625ms
	W0717 11:04:53.496370    9586 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-212000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-212000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:04:53.504248    9586 out.go:177] 
	W0717 11:04:53.507275    9586 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:04:53.507293    9586 out.go:239] * 
	* 
	W0717 11:04:53.509079    9586 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:04:53.517177    9586 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-212000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-212000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-212000 version --output=json: exit status 1 (63.648167ms)

** stderr ** 
	error: context "kubernetes-upgrade-212000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-17 11:04:53.595219 -0700 PDT m=+950.035685917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-212000 -n kubernetes-upgrade-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-212000 -n kubernetes-upgrade-212000: exit status 7 (33.1365ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-212000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-212000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-212000
--- FAIL: TestKubernetesUpgrade (17.05s)
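Every qemu2 start in this run dies the same way: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, meaning no socket_vmnet daemon is accepting on that path. As a minimal sketch (the helper `probe_unix_socket` is hypothetical, not part of minikube or socket_vmnet), the three states the client can observe on a unix socket path can be distinguished like this:

```python
import os
import socket

def probe_unix_socket(path):
    """Classify a unix-domain socket path: missing file, stale socket, or live listener."""
    if not os.path.exists(path):
        return "missing"      # daemon never created the socket file
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "accepting"    # a listener (e.g. socket_vmnet) is attached
    except ConnectionRefusedError:
        return "refused"      # socket file exists, but nothing is listening behind it
    finally:
        s.close()

if __name__ == "__main__":
    # Demonstrate the "refused" case seen in this log: a socket file that is
    # bound but has no listen() behind it yields ECONNREFUSED on connect.
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        stale = os.path.join(d, "vmnet.sock")
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(stale)       # file exists, no listener
        print(probe_unix_socket(stale))  # -> refused
        srv.close()
```

On the failing agent, a "refused" result for /var/run/socket_vmnet would indicate a stale socket file left behind by a crashed daemon; "missing" would indicate the daemon was never started at all.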

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.28s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19283
- KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4195350497/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.28s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.23s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19283
- KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4039715840/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.23s)

TestStoppedBinaryUpgrade/Upgrade (574.83s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1538699070 start -p stopped-upgrade-018000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1538699070 start -p stopped-upgrade-018000 --memory=2200 --vm-driver=qemu2 : (41.553624167s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1538699070 -p stopped-upgrade-018000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1538699070 -p stopped-upgrade-018000 stop: (12.099323167s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-018000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-018000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.080465625s)

-- stdout --
	* [stopped-upgrade-018000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-018000" primary control-plane node in "stopped-upgrade-018000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-018000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0717 11:05:48.491603    9661 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:05:48.491756    9661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:48.491759    9661 out.go:304] Setting ErrFile to fd 2...
	I0717 11:05:48.491761    9661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:05:48.491897    9661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:05:48.492985    9661 out.go:298] Setting JSON to false
	I0717 11:05:48.510459    9661 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5716,"bootTime":1721233832,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:05:48.510523    9661 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:05:48.515255    9661 out.go:177] * [stopped-upgrade-018000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:05:48.522250    9661 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:05:48.522308    9661 notify.go:220] Checking for updates...
	I0717 11:05:48.529191    9661 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:05:48.532184    9661 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:05:48.535126    9661 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:05:48.538171    9661 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:05:48.541179    9661 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:05:48.542667    9661 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:05:48.546121    9661 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 11:05:48.549162    9661 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:05:48.553035    9661 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:05:48.560162    9661 start.go:297] selected driver: qemu2
	I0717 11:05:48.560167    9661 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51499 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:05:48.560210    9661 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:05:48.562919    9661 cni.go:84] Creating CNI manager for ""
	I0717 11:05:48.562934    9661 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:05:48.562962    9661 start.go:340] cluster config:
	{Name:stopped-upgrade-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51499 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:05:48.563011    9661 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:05:48.571143    9661 out.go:177] * Starting "stopped-upgrade-018000" primary control-plane node in "stopped-upgrade-018000" cluster
	I0717 11:05:48.575189    9661 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:05:48.575205    9661 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0717 11:05:48.575226    9661 cache.go:56] Caching tarball of preloaded images
	I0717 11:05:48.575294    9661 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:05:48.575299    9661 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0717 11:05:48.575355    9661 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/config.json ...
	I0717 11:05:48.575769    9661 start.go:360] acquireMachinesLock for stopped-upgrade-018000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:05:48.575802    9661 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "stopped-upgrade-018000"
	I0717 11:05:48.575809    9661 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:05:48.575814    9661 fix.go:54] fixHost starting: 
	I0717 11:05:48.575915    9661 fix.go:112] recreateIfNeeded on stopped-upgrade-018000: state=Stopped err=<nil>
	W0717 11:05:48.575922    9661 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:05:48.583196    9661 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-018000" ...
	I0717 11:05:48.587153    9661 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:05:48.587219    9661 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51465-:22,hostfwd=tcp::51466-:2376,hostname=stopped-upgrade-018000 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/disk.qcow2
	I0717 11:05:48.635185    9661 main.go:141] libmachine: STDOUT: 
	I0717 11:05:48.635217    9661 main.go:141] libmachine: STDERR: 
	I0717 11:05:48.635223    9661 main.go:141] libmachine: Waiting for VM to start (ssh -p 51465 docker@127.0.0.1)...
	I0717 11:06:09.192536    9661 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/config.json ...
	I0717 11:06:09.193366    9661 machine.go:94] provisionDockerMachine start ...
	I0717 11:06:09.193571    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.194165    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.194181    9661 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 11:06:09.289451    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 11:06:09.289486    9661 buildroot.go:166] provisioning hostname "stopped-upgrade-018000"
	I0717 11:06:09.289626    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.289883    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.289904    9661 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-018000 && echo "stopped-upgrade-018000" | sudo tee /etc/hostname
	I0717 11:06:09.370915    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-018000
	
	I0717 11:06:09.370996    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.371160    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.371173    9661 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-018000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-018000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-018000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 11:06:09.445833    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 11:06:09.445847    9661 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-6848/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-6848/.minikube}
	I0717 11:06:09.445856    9661 buildroot.go:174] setting up certificates
	I0717 11:06:09.445861    9661 provision.go:84] configureAuth start
	I0717 11:06:09.445865    9661 provision.go:143] copyHostCerts
	I0717 11:06:09.445958    9661 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem, removing ...
	I0717 11:06:09.445967    9661 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem
	I0717 11:06:09.446100    9661 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.pem (1082 bytes)
	I0717 11:06:09.446322    9661 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem, removing ...
	I0717 11:06:09.446327    9661 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem
	I0717 11:06:09.446961    9661 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/cert.pem (1123 bytes)
	I0717 11:06:09.447097    9661 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem, removing ...
	I0717 11:06:09.447101    9661 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem
	I0717 11:06:09.447161    9661 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-6848/.minikube/key.pem (1679 bytes)
	I0717 11:06:09.447291    9661 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-018000 san=[127.0.0.1 localhost minikube stopped-upgrade-018000]
	I0717 11:06:09.525636    9661 provision.go:177] copyRemoteCerts
	I0717 11:06:09.525665    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 11:06:09.525671    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:06:09.562712    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 11:06:09.570203    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 11:06:09.577327    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 11:06:09.583961    9661 provision.go:87] duration metric: took 138.097208ms to configureAuth
	I0717 11:06:09.583978    9661 buildroot.go:189] setting minikube options for container-runtime
	I0717 11:06:09.584078    9661 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:06:09.584111    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.584207    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.584211    9661 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 11:06:09.651900    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0717 11:06:09.651907    9661 buildroot.go:70] root file system type: tmpfs
	I0717 11:06:09.651958    9661 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 11:06:09.652004    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.652118    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.652153    9661 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 11:06:09.724215    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 11:06:09.724280    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:09.724399    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:09.724412    9661 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 11:06:10.095888    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0717 11:06:10.095902    9661 machine.go:97] duration metric: took 902.53125ms to provisionDockerMachine
	I0717 11:06:10.095910    9661 start.go:293] postStartSetup for "stopped-upgrade-018000" (driver="qemu2")
	I0717 11:06:10.095917    9661 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 11:06:10.095969    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 11:06:10.095979    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:06:10.133902    9661 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 11:06:10.135121    9661 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 11:06:10.135132    9661 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-6848/.minikube/addons for local assets ...
	I0717 11:06:10.135222    9661 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-6848/.minikube/files for local assets ...
	I0717 11:06:10.135351    9661 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem -> 73362.pem in /etc/ssl/certs
	I0717 11:06:10.135479    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 11:06:10.138480    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem --> /etc/ssl/certs/73362.pem (1708 bytes)
	I0717 11:06:10.145845    9661 start.go:296] duration metric: took 49.931334ms for postStartSetup
	I0717 11:06:10.145859    9661 fix.go:56] duration metric: took 21.570194917s for fixHost
	I0717 11:06:10.145896    9661 main.go:141] libmachine: Using SSH client type: native
	I0717 11:06:10.146000    9661 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b29b0] 0x1028b5210 <nil>  [] 0s} localhost 51465 <nil> <nil>}
	I0717 11:06:10.146009    9661 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 11:06:10.211409    9661 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239569.805862171
	
	I0717 11:06:10.211415    9661 fix.go:216] guest clock: 1721239569.805862171
	I0717 11:06:10.211419    9661 fix.go:229] Guest: 2024-07-17 11:06:09.805862171 -0700 PDT Remote: 2024-07-17 11:06:10.145861 -0700 PDT m=+21.677974917 (delta=-339.998829ms)
	I0717 11:06:10.211433    9661 fix.go:200] guest clock delta is within tolerance: -339.998829ms
	I0717 11:06:10.211436    9661 start.go:83] releasing machines lock for "stopped-upgrade-018000", held for 21.635780792s
	I0717 11:06:10.211498    9661 ssh_runner.go:195] Run: cat /version.json
	I0717 11:06:10.211507    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:06:10.211498    9661 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 11:06:10.211552    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	W0717 11:06:10.212089    9661 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51465: connect: connection refused
	I0717 11:06:10.212114    9661 retry.go:31] will retry after 295.619898ms: dial tcp [::1]:51465: connect: connection refused
	W0717 11:06:10.246755    9661 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 11:06:10.246806    9661 ssh_runner.go:195] Run: systemctl --version
	I0717 11:06:10.248774    9661 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 11:06:10.250557    9661 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 11:06:10.250583    9661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 11:06:10.253902    9661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 11:06:10.258640    9661 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 11:06:10.258647    9661 start.go:495] detecting cgroup driver to use...
	I0717 11:06:10.258726    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:06:10.265983    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0717 11:06:10.269484    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 11:06:10.272577    9661 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 11:06:10.272601    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 11:06:10.275627    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:06:10.278562    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 11:06:10.282017    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 11:06:10.285420    9661 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 11:06:10.288334    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 11:06:10.291153    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 11:06:10.294486    9661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 11:06:10.298104    9661 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 11:06:10.301262    9661 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 11:06:10.304171    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:10.382112    9661 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 11:06:10.392816    9661 start.go:495] detecting cgroup driver to use...
	I0717 11:06:10.392893    9661 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 11:06:10.397935    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:06:10.402245    9661 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 11:06:10.413491    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 11:06:10.418179    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:06:10.423250    9661 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 11:06:10.479405    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 11:06:10.484632    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 11:06:10.490329    9661 ssh_runner.go:195] Run: which cri-dockerd
	I0717 11:06:10.491745    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 11:06:10.494462    9661 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 11:06:10.499062    9661 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 11:06:10.581590    9661 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 11:06:10.655582    9661 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 11:06:10.655646    9661 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 11:06:10.662623    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:10.742597    9661 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:06:11.865368    9661 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.122765833s)
	I0717 11:06:11.865436    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 11:06:11.870769    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:06:11.875809    9661 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 11:06:11.947069    9661 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 11:06:12.028520    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:12.107440    9661 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 11:06:12.113746    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 11:06:12.118565    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:12.196663    9661 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 11:06:12.235393    9661 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 11:06:12.235486    9661 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 11:06:12.238277    9661 start.go:563] Will wait 60s for crictl version
	I0717 11:06:12.238327    9661 ssh_runner.go:195] Run: which crictl
	I0717 11:06:12.239529    9661 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 11:06:12.254618    9661 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0717 11:06:12.254703    9661 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:06:12.270797    9661 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 11:06:12.290661    9661 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0717 11:06:12.290728    9661 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0717 11:06:12.292515    9661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 11:06:12.296311    9661 kubeadm.go:883] updating cluster {Name:stopped-upgrade-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51499 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0717 11:06:12.296383    9661 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0717 11:06:12.296421    9661 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:06:12.311597    9661 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:06:12.311606    9661 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:06:12.311654    9661 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:06:12.314817    9661 ssh_runner.go:195] Run: which lz4
	I0717 11:06:12.316294    9661 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 11:06:12.317541    9661 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 11:06:12.317562    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0717 11:06:13.253695    9661 docker.go:649] duration metric: took 937.439083ms to copy over tarball
	I0717 11:06:13.253751    9661 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 11:06:14.413200    9661 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159444958s)
	I0717 11:06:14.413217    9661 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 11:06:14.429096    9661 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 11:06:14.432089    9661 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0717 11:06:14.437032    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:14.516590    9661 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 11:06:16.173493    9661 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.656896792s)
	I0717 11:06:16.173601    9661 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 11:06:16.185696    9661 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 11:06:16.185704    9661 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0717 11:06:16.185709    9661 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 11:06:16.191382    9661 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.193369    9661 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.195187    9661 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.195264    9661 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.197350    9661 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.197404    9661 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.198746    9661 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.198763    9661 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.200244    9661 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.200250    9661 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.201483    9661 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.201488    9661 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 11:06:16.203015    9661 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.203005    9661 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.203888    9661 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 11:06:16.204943    9661 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.607395    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.617831    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.620123    9661 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0717 11:06:16.620148    9661 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.620186    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0717 11:06:16.630814    9661 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0717 11:06:16.630851    9661 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.630907    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0717 11:06:16.631729    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0717 11:06:16.641196    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0717 11:06:16.647764    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.648506    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.657199    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.661784    9661 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0717 11:06:16.661797    9661 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0717 11:06:16.661803    9661 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.661807    9661 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.661847    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0717 11:06:16.661847    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0717 11:06:16.677797    9661 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0717 11:06:16.677824    9661 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.677877    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0717 11:06:16.680082    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0717 11:06:16.682363    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0717 11:06:16.682382    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0717 11:06:16.689411    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0717 11:06:16.693795    9661 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0717 11:06:16.693811    9661 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0717 11:06:16.693865    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0717 11:06:16.703993    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0717 11:06:16.704116    9661 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0717 11:06:16.705671    9661 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0717 11:06:16.705684    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0717 11:06:16.714126    9661 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0717 11:06:16.714137    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0717 11:06:16.726365    9661 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0717 11:06:16.726487    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.749699    9661 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0717 11:06:16.749751    9661 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0717 11:06:16.749768    9661 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.749820    9661 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0717 11:06:16.760323    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 11:06:16.760431    9661 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:06:16.761779    9661 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0717 11:06:16.761791    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0717 11:06:16.805476    9661 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0717 11:06:16.805492    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0717 11:06:16.846457    9661 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0717 11:06:16.852881    9661 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 11:06:16.853027    9661 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.864711    9661 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 11:06:16.864732    9661 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.864784    9661 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:06:16.878183    9661 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 11:06:16.878302    9661 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:06:16.879722    9661 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 11:06:16.879734    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0717 11:06:16.907145    9661 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 11:06:16.907167    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0717 11:06:17.146828    9661 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 11:06:17.146865    9661 cache_images.go:92] duration metric: took 961.156708ms to LoadCachedImages
	W0717 11:06:17.146910    9661 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0717 11:06:17.146916    9661 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0717 11:06:17.146981    9661 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-018000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 11:06:17.147045    9661 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 11:06:17.163689    9661 cni.go:84] Creating CNI manager for ""
	I0717 11:06:17.163703    9661 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:06:17.163707    9661 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 11:06:17.163716    9661 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-018000 NodeName:stopped-upgrade-018000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 11:06:17.163781    9661 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-018000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 11:06:17.163832    9661 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0717 11:06:17.166762    9661 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 11:06:17.166789    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 11:06:17.170014    9661 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 11:06:17.175278    9661 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 11:06:17.180097    9661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0717 11:06:17.185357    9661 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0717 11:06:17.186700    9661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 11:06:17.190596    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:06:17.269431    9661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:06:17.276470    9661 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000 for IP: 10.0.2.15
	I0717 11:06:17.276481    9661 certs.go:194] generating shared ca certs ...
	I0717 11:06:17.276490    9661 certs.go:226] acquiring lock for ca certs: {Name:mk50b621e3b03c5626e0e338e372bd26b7b413d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:17.276659    9661 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.key
	I0717 11:06:17.276715    9661 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.key
	I0717 11:06:17.276720    9661 certs.go:256] generating profile certs ...
	I0717 11:06:17.276814    9661 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.key
	I0717 11:06:17.276834    9661 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key.0811418c
	I0717 11:06:17.276845    9661 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt.0811418c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0717 11:06:17.422657    9661 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt.0811418c ...
	I0717 11:06:17.422670    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt.0811418c: {Name:mkab3957881c9d5f0f16ee6aed288ae575f57d0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:17.423228    9661 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key.0811418c ...
	I0717 11:06:17.423242    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key.0811418c: {Name:mk5eacf4c7de8eaeedb0e3634d3614958a122f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:17.423400    9661 certs.go:381] copying /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt.0811418c -> /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt
	I0717 11:06:17.423567    9661 certs.go:385] copying /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key.0811418c -> /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key
	I0717 11:06:17.423740    9661 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/proxy-client.key
	I0717 11:06:17.423875    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336.pem (1338 bytes)
	W0717 11:06:17.423908    9661 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336_empty.pem, impossibly tiny 0 bytes
	I0717 11:06:17.423914    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 11:06:17.423935    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem (1082 bytes)
	I0717 11:06:17.423955    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem (1123 bytes)
	I0717 11:06:17.423971    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/key.pem (1679 bytes)
	I0717 11:06:17.424009    9661 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem (1708 bytes)
	I0717 11:06:17.424322    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 11:06:17.431393    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 11:06:17.438592    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 11:06:17.445468    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 11:06:17.452556    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 11:06:17.459140    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 11:06:17.466310    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 11:06:17.473581    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 11:06:17.481333    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 11:06:17.488319    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/7336.pem --> /usr/share/ca-certificates/7336.pem (1338 bytes)
	I0717 11:06:17.495007    9661 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/ssl/certs/73362.pem --> /usr/share/ca-certificates/73362.pem (1708 bytes)
	I0717 11:06:17.501926    9661 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 11:06:17.507174    9661 ssh_runner.go:195] Run: openssl version
	I0717 11:06:17.509061    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73362.pem && ln -fs /usr/share/ca-certificates/73362.pem /etc/ssl/certs/73362.pem"
	I0717 11:06:17.512084    9661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73362.pem
	I0717 11:06:17.513501    9661 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:49 /usr/share/ca-certificates/73362.pem
	I0717 11:06:17.513522    9661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73362.pem
	I0717 11:06:17.515362    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73362.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 11:06:17.518589    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 11:06:17.521910    9661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:17.523377    9661 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:02 /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:17.523395    9661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 11:06:17.525252    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 11:06:17.528203    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7336.pem && ln -fs /usr/share/ca-certificates/7336.pem /etc/ssl/certs/7336.pem"
	I0717 11:06:17.530945    9661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7336.pem
	I0717 11:06:17.532621    9661 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:49 /usr/share/ca-certificates/7336.pem
	I0717 11:06:17.532646    9661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7336.pem
	I0717 11:06:17.534461    9661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7336.pem /etc/ssl/certs/51391683.0"
	I0717 11:06:17.537888    9661 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 11:06:17.539468    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 11:06:17.541711    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 11:06:17.543804    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 11:06:17.545852    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 11:06:17.547672    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 11:06:17.549484    9661 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 11:06:17.551321    9661 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-018000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51499 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-018000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0717 11:06:17.551387    9661 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:06:17.561806    9661 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 11:06:17.564985    9661 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 11:06:17.564993    9661 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 11:06:17.565017    9661 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 11:06:17.567737    9661 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 11:06:17.568048    9661 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-018000" does not appear in /Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:06:17.568148    9661 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-6848/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-018000" cluster setting kubeconfig missing "stopped-upgrade-018000" context setting]
	I0717 11:06:17.568351    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/kubeconfig: {Name:mk327624617af42cf4cf2f31d3ffb3402af5684d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:06:17.568814    9661 kapi.go:59] client config for stopped-upgrade-018000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c47730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:06:17.569160    9661 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 11:06:17.571842    9661 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-018000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0717 11:06:17.571848    9661 kubeadm.go:1160] stopping kube-system containers ...
	I0717 11:06:17.571887    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 11:06:17.582344    9661 docker.go:483] Stopping containers: [4550eb8b3005 9e1750a1a505 c74cc3d31c5c 342fc1ee8e0f f263e9f5bbf8 7dc850247de5 de75fc7f8d80 a2cd3facfb95]
	I0717 11:06:17.582411    9661 ssh_runner.go:195] Run: docker stop 4550eb8b3005 9e1750a1a505 c74cc3d31c5c 342fc1ee8e0f f263e9f5bbf8 7dc850247de5 de75fc7f8d80 a2cd3facfb95
	I0717 11:06:17.597219    9661 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 11:06:17.602722    9661 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:06:17.605983    9661 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:06:17.605990    9661 kubeadm.go:157] found existing configuration files:
	
	I0717 11:06:17.606016    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/admin.conf
	I0717 11:06:17.608997    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:06:17.609018    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:06:17.611513    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/kubelet.conf
	I0717 11:06:17.614224    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:06:17.614246    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:06:17.617330    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/controller-manager.conf
	I0717 11:06:17.619874    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:06:17.619894    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:06:17.622584    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/scheduler.conf
	I0717 11:06:17.625494    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:06:17.625520    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 11:06:17.628255    9661 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:06:17.630915    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:17.652466    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:18.032777    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:18.166452    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:18.189735    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 11:06:18.215208    9661 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:06:18.215290    9661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:18.716938    9661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:19.217358    9661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:06:19.222149    9661 api_server.go:72] duration metric: took 1.006949917s to wait for apiserver process to appear ...
	I0717 11:06:19.222158    9661 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:06:19.222168    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:24.223942    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:24.223980    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:29.224207    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:29.224259    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:34.224751    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:34.224827    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:39.225354    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:39.225404    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:44.226135    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:44.226198    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:49.226930    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:49.226983    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:54.227981    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:54.228030    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:06:59.229366    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:06:59.229404    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:04.231069    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:04.231103    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:09.231892    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:09.231930    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:14.234124    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:14.234149    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:19.236300    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:19.236444    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:19.251380    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:19.251467    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:19.263371    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:19.263442    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:19.274402    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:19.274469    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:19.284734    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:19.284804    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:19.295692    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:19.295760    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:19.306334    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:19.306401    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:19.316690    9661 logs.go:276] 0 containers: []
	W0717 11:07:19.316702    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:19.316763    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:19.326958    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:19.326979    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:19.326984    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:19.441845    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:19.441857    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:19.455807    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:19.455818    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:19.471244    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:19.471254    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:19.484219    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:19.484230    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:19.523149    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:19.523156    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:19.564122    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:19.564133    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:19.582611    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:19.582624    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:19.594161    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:19.594172    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:19.605928    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:19.605940    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:19.618496    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:19.618508    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:19.644027    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:19.644038    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:19.655876    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:19.655887    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:19.660274    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:19.660287    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:19.673959    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:19.673970    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:19.686013    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:19.686025    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:22.204038    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:27.206258    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:27.206453    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:27.223886    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:27.223962    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:27.237099    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:27.237180    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:27.248964    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:27.249034    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:27.263176    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:27.263243    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:27.273604    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:27.273678    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:27.284376    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:27.284455    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:27.294961    9661 logs.go:276] 0 containers: []
	W0717 11:07:27.294974    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:27.295029    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:27.306053    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:27.306072    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:27.306079    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:27.310733    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:27.310742    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:27.348686    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:27.348699    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:27.361991    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:27.362001    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:27.379379    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:27.379392    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:27.403639    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:27.403649    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:27.415130    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:27.415140    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:27.428307    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:27.428317    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:27.439778    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:27.439788    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:27.477608    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:27.477619    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:27.498919    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:27.498932    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:27.512960    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:27.512976    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:27.524924    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:27.524935    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:27.563474    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:27.563489    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:27.575319    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:27.575329    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:27.586931    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:27.586942    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:30.106758    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:35.109255    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:35.109443    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:35.126677    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:35.126778    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:35.140386    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:35.140464    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:35.151999    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:35.152074    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:35.162447    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:35.162522    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:35.175888    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:35.175959    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:35.187214    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:35.187283    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:35.197270    9661 logs.go:276] 0 containers: []
	W0717 11:07:35.197280    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:35.197339    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:35.207763    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:35.207783    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:35.207788    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:35.222005    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:35.222016    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:35.233963    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:35.233975    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:35.251986    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:35.251998    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:35.270556    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:35.270570    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:35.281995    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:35.282007    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:35.293532    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:35.293544    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:35.306832    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:35.306843    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:35.320924    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:35.320934    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:35.336154    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:35.336164    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:35.348473    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:35.348484    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:35.385521    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:35.385538    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:35.410144    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:35.410155    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:35.421830    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:35.421843    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:35.459381    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:35.459389    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:35.463267    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:35.463275    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:38.001397    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:43.002873    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:43.002989    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:43.020910    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:43.020984    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:43.032301    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:43.032380    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:43.042305    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:43.042377    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:43.052857    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:43.052927    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:43.063867    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:43.063931    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:43.074410    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:43.074475    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:43.085117    9661 logs.go:276] 0 containers: []
	W0717 11:07:43.085128    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:43.085180    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:43.095832    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:43.095853    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:43.095858    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:43.109991    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:43.110002    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:43.121924    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:43.121935    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:43.138812    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:43.138824    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:43.152920    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:43.152931    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:43.177798    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:43.177805    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:43.191880    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:43.191893    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:43.228749    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:43.228760    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:43.243892    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:43.243902    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:43.255388    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:43.255398    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:43.259675    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:43.259687    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:43.273871    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:43.273883    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:43.285548    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:43.285558    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:43.297265    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:43.297281    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:43.309584    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:43.309595    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:43.349177    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:43.349187    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:45.888666    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:50.891416    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:50.891591    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:50.911289    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:50.911390    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:50.927895    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:50.927969    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:50.939631    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:50.939704    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:50.951492    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:50.951570    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:50.962135    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:50.962201    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:50.974430    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:50.974501    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:50.986014    9661 logs.go:276] 0 containers: []
	W0717 11:07:50.986025    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:50.986084    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:51.003412    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:51.003430    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:51.003436    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:51.017085    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:51.017098    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:51.030093    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:51.030106    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:51.049313    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:51.049326    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:51.064624    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:51.064636    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:07:51.077650    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:51.077662    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:51.097318    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:51.097330    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:51.111781    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:51.111793    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:51.124022    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:51.124032    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:51.148896    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:51.148914    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:51.187375    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:51.187391    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:51.226049    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:51.226062    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:51.230762    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:51.230774    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:51.249097    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:51.249108    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:51.289803    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:51.289817    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:51.308806    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:51.308820    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:53.823030    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:07:58.825308    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:07:58.825498    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:07:58.843961    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:07:58.844051    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:07:58.857842    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:07:58.857919    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:07:58.871751    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:07:58.871818    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:07:58.883803    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:07:58.883882    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:07:58.894928    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:07:58.895001    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:07:58.906511    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:07:58.906587    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:07:58.917908    9661 logs.go:276] 0 containers: []
	W0717 11:07:58.917921    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:07:58.917988    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:07:58.929849    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:07:58.929869    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:07:58.929875    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:07:58.944337    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:07:58.944351    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:07:58.985729    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:07:58.985742    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:07:59.025896    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:07:59.025909    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:07:59.041442    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:07:59.041460    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:07:59.057753    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:07:59.057769    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:07:59.071711    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:07:59.071724    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:07:59.113095    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:07:59.113117    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:07:59.128066    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:07:59.128081    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:07:59.147374    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:07:59.147385    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:07:59.172976    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:07:59.172985    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:07:59.186071    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:07:59.186083    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:07:59.203218    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:07:59.203229    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:07:59.215620    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:07:59.215631    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:07:59.219752    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:07:59.219759    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:07:59.234975    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:07:59.234986    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:01.754964    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:06.757201    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:06.757257    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:06.772839    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:06.772908    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:06.784128    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:06.784176    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:06.795605    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:06.795659    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:06.806711    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:06.806757    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:06.817526    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:06.817563    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:06.828663    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:06.828699    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:06.840148    9661 logs.go:276] 0 containers: []
	W0717 11:08:06.840157    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:06.840211    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:06.858619    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:06.858635    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:06.858639    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:06.900083    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:06.900098    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:06.914904    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:06.914920    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:06.927305    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:06.927317    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:06.939801    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:06.939813    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:06.953935    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:06.953949    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:06.966984    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:06.966997    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:06.980356    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:06.980368    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:06.999027    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:06.999038    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:07.024510    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:07.024521    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:07.029779    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:07.029788    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:07.045768    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:07.045780    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:07.058032    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:07.058045    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:07.098155    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:07.098166    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:07.140617    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:07.140631    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:07.155014    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:07.155025    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:09.671906    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:14.674162    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:14.674235    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:14.686320    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:14.686397    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:14.708079    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:14.708146    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:14.719587    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:14.719654    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:14.731640    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:14.731711    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:14.743000    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:14.743070    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:14.754136    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:14.754204    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:14.764815    9661 logs.go:276] 0 containers: []
	W0717 11:08:14.764827    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:14.764885    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:14.776420    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:14.776436    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:14.776442    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:14.789037    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:14.789050    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:14.806511    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:14.806525    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:14.821580    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:14.821593    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:14.834511    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:14.834524    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:14.853921    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:14.853935    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:14.866714    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:14.866730    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:14.904279    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:14.904288    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:14.919386    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:14.919399    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:14.932517    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:14.932529    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:14.956826    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:14.956837    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:14.976244    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:14.976255    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:14.981002    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:14.981012    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:15.023758    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:15.023770    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:15.035811    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:15.035823    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:15.049361    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:15.049374    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:17.588001    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:22.590125    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:22.590214    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:22.602256    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:22.602335    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:22.613990    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:22.614061    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:22.627428    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:22.627494    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:22.643331    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:22.643396    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:22.654997    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:22.655072    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:22.666546    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:22.666621    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:22.677864    9661 logs.go:276] 0 containers: []
	W0717 11:08:22.677875    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:22.677938    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:22.689022    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:22.689044    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:22.689051    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:22.693628    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:22.693635    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:22.708450    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:22.708467    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:22.723402    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:22.723413    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:22.741916    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:22.741931    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:22.755020    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:22.755033    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:22.767551    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:22.767562    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:22.807919    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:22.807929    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:22.830958    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:22.830969    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:22.854087    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:22.854095    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:22.913360    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:22.913373    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:22.931964    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:22.931975    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:22.945500    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:22.945511    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:22.986336    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:22.986350    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:23.001176    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:23.001191    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:23.012540    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:23.012553    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:25.526531    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:30.528875    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:30.528973    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:30.540340    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:30.540408    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:30.556528    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:30.556601    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:30.567615    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:30.567685    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:30.579445    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:30.579520    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:30.590520    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:30.590586    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:30.602081    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:30.602144    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:30.613927    9661 logs.go:276] 0 containers: []
	W0717 11:08:30.613939    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:30.613996    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:30.625897    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:30.625913    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:30.625918    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:30.665177    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:30.665198    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:30.700687    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:30.700697    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:30.718223    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:30.718235    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:30.732060    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:30.732071    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:30.743997    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:30.744007    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:30.768212    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:30.768219    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:30.807159    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:30.807174    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:30.822814    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:30.822824    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:30.836303    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:30.836314    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:30.847683    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:30.847693    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:30.858982    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:30.858994    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:30.863278    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:30.863286    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:30.877060    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:30.877072    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:30.894665    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:30.894676    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:30.913366    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:30.913377    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:33.429756    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:38.431923    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:38.432076    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:38.445399    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:38.445473    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:38.456635    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:38.456706    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:38.467433    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:38.467503    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:38.478581    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:38.478659    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:38.491346    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:38.491438    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:38.503317    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:38.503387    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:38.513639    9661 logs.go:276] 0 containers: []
	W0717 11:08:38.513651    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:38.513711    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:38.524335    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:38.524353    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:38.524359    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:38.528616    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:38.528623    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:38.542499    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:38.542509    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:38.559808    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:38.559818    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:38.573315    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:38.573325    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:38.615397    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:38.615409    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:38.627921    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:38.627931    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:38.649318    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:38.649329    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:38.661485    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:38.661499    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:38.679706    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:38.679717    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:38.691547    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:38.691558    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:38.703200    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:38.703210    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:38.742528    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:38.742537    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:38.781014    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:38.781024    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:38.796136    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:38.796146    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:38.814173    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:38.814186    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:41.341348    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:46.343553    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:46.343670    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:46.354970    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:46.355044    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:46.367237    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:46.367306    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:46.377457    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:46.377527    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:46.388402    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:46.388476    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:46.403398    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:46.403475    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:46.414085    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:46.414150    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:46.424695    9661 logs.go:276] 0 containers: []
	W0717 11:08:46.424709    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:46.424769    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:46.435372    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:46.435391    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:46.435397    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:46.449350    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:46.449363    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:46.489377    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:46.489389    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:46.500884    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:46.500897    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:46.514720    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:46.514731    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:46.525982    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:46.525993    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:46.540207    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:46.540219    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:46.557432    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:46.557443    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:46.568792    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:46.568804    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:46.581757    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:46.581770    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:46.606930    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:46.606940    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:46.618848    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:46.618864    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:46.630477    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:46.630487    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:46.670814    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:46.670832    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:46.675471    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:46.675479    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:46.711567    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:46.711577    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:49.227447    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:08:54.228794    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:08:54.228926    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:08:54.242493    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:08:54.242558    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:08:54.255073    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:08:54.255142    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:08:54.265278    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:08:54.265344    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:08:54.275954    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:08:54.276026    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:08:54.286711    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:08:54.286771    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:08:54.297553    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:08:54.297621    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:08:54.307904    9661 logs.go:276] 0 containers: []
	W0717 11:08:54.307915    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:08:54.307972    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:08:54.321757    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:08:54.321776    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:08:54.321781    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:08:54.337139    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:08:54.337150    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:08:54.351870    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:08:54.351882    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:08:54.369400    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:08:54.369410    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:08:54.383619    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:08:54.383629    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:08:54.407196    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:08:54.407207    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:08:54.444676    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:08:54.444686    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:08:54.467941    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:08:54.467953    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:08:54.506665    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:08:54.506683    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:08:54.525117    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:08:54.525126    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:08:54.538100    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:08:54.538112    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:08:54.549352    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:08:54.549363    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:08:54.561395    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:08:54.561406    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:08:54.573472    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:08:54.573482    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:08:54.577738    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:08:54.577747    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:08:54.613323    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:08:54.613334    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:08:57.128514    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:02.131056    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:02.131207    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:02.146846    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:02.146923    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:02.159360    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:02.159436    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:02.170332    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:02.170398    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:02.180475    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:02.180545    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:02.190677    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:02.190742    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:02.201116    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:02.201182    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:02.211229    9661 logs.go:276] 0 containers: []
	W0717 11:09:02.211242    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:02.211298    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:02.221616    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:02.221634    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:02.221640    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:02.235638    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:02.235648    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:02.248955    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:02.248966    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:02.286583    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:02.286597    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:02.322110    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:02.322123    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:02.336604    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:02.336617    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:02.350759    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:02.350772    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:02.362799    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:02.362813    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:02.380458    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:02.380467    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:02.392261    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:02.392273    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:02.412029    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:02.412044    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:02.423813    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:02.423827    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:02.448353    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:02.448364    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:02.452310    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:02.452316    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:02.491655    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:02.491669    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:02.502955    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:02.502966    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:05.019895    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:10.022090    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:10.022244    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:10.037804    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:10.037873    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:10.049042    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:10.049114    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:10.059746    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:10.059811    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:10.070308    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:10.070369    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:10.080519    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:10.080586    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:10.096753    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:10.096825    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:10.106961    9661 logs.go:276] 0 containers: []
	W0717 11:09:10.106973    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:10.107030    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:10.121900    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:10.121919    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:10.121924    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:10.134505    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:10.134520    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:10.148721    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:10.148736    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:10.163527    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:10.163542    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:10.174852    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:10.174866    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:10.199098    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:10.199105    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:10.213633    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:10.213650    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:10.224586    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:10.224598    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:10.236349    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:10.236361    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:10.279143    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:10.279153    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:10.291161    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:10.291175    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:10.304438    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:10.304454    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:10.322679    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:10.322694    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:10.360318    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:10.360326    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:10.364222    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:10.364229    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:10.402707    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:10.402723    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:12.919333    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:17.921515    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:17.921621    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:17.933177    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:17.933257    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:17.943472    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:17.943543    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:17.953853    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:17.953923    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:17.964422    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:17.964488    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:17.975305    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:17.975367    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:17.985944    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:17.986015    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:17.996114    9661 logs.go:276] 0 containers: []
	W0717 11:09:17.996131    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:17.996187    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:18.006779    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:18.006795    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:18.006800    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:18.026202    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:18.026215    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:18.037844    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:18.037859    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:18.051833    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:18.051848    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:18.088565    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:18.088574    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:18.100108    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:18.100120    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:18.117532    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:18.117546    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:18.129612    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:18.129622    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:18.133944    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:18.133950    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:18.167613    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:18.167627    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:18.192330    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:18.192341    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:18.203750    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:18.203762    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:18.221229    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:18.221242    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:18.261216    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:18.261230    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:18.275932    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:18.275946    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:18.291325    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:18.291336    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:20.805388    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:25.807619    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:25.807744    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:25.822040    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:25.822112    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:25.834036    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:25.834099    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:25.844346    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:25.844420    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:25.855027    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:25.855088    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:25.866081    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:25.866152    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:25.876842    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:25.876906    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:25.888995    9661 logs.go:276] 0 containers: []
	W0717 11:09:25.889006    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:25.889058    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:25.900011    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:25.900032    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:25.900038    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:25.911632    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:25.911643    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:25.916210    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:25.916216    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:25.955370    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:25.955383    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:25.966807    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:25.966820    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:26.003314    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:26.003327    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:26.017718    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:26.017729    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:26.032672    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:26.032686    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:26.044639    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:26.044650    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:26.061645    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:26.061655    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:26.074128    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:26.074141    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:26.086409    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:26.086418    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:26.123665    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:26.123675    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:26.137313    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:26.137322    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:26.156337    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:26.156348    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:26.168352    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:26.168362    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:28.692538    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:33.694758    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:33.694843    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:33.708059    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:33.708131    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:33.724759    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:33.724823    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:33.734855    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:33.734923    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:33.748700    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:33.748769    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:33.759324    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:33.759388    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:33.770396    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:33.770466    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:33.780977    9661 logs.go:276] 0 containers: []
	W0717 11:09:33.780988    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:33.781046    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:33.791358    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:33.791377    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:33.791383    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:33.813468    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:33.813475    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:33.849922    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:33.849929    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:33.861078    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:33.861092    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:33.882611    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:33.882621    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:33.894015    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:33.894026    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:33.898943    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:33.898951    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:33.935436    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:33.935450    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:33.974617    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:33.974630    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:33.988538    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:33.988549    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:34.003065    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:34.003076    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:34.016083    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:34.016093    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:34.028576    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:34.028587    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:34.042610    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:34.042621    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:34.054338    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:34.054351    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:34.069617    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:34.069628    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:36.583494    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:41.585787    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:41.585881    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:41.604887    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:41.604955    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:41.616747    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:41.616816    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:41.628948    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:41.629008    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:41.644681    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:41.644751    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:41.656100    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:41.656165    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:41.667927    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:41.667996    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:41.678379    9661 logs.go:276] 0 containers: []
	W0717 11:09:41.678400    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:41.678456    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:41.689056    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:41.689073    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:41.689078    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:41.703297    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:41.703309    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:41.717093    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:41.717106    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:41.731252    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:41.731262    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:41.742792    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:41.742803    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:41.754237    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:41.754249    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:41.767628    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:41.767639    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:41.779549    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:41.779560    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:41.794373    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:41.794384    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:41.806023    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:41.806036    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:41.842715    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:41.842728    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:41.859955    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:41.859966    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:41.882649    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:41.882657    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:41.921872    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:41.921882    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:41.925787    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:41.925795    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:41.963019    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:41.963031    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:44.478684    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:49.480952    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:49.481084    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:49.492334    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:49.492414    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:49.503155    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:49.503222    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:49.513939    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:49.514001    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:49.524630    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:49.524700    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:49.535348    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:49.535406    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:49.545787    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:49.545860    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:49.558693    9661 logs.go:276] 0 containers: []
	W0717 11:09:49.558707    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:49.558766    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:49.574618    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:49.574634    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:49.574640    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:49.611998    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:49.612008    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:49.626808    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:49.626819    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:49.638847    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:49.638858    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:49.662899    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:49.662910    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:49.684999    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:49.685009    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:49.700956    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:49.700969    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:49.714722    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:49.714731    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:49.728997    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:49.729010    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:49.740410    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:49.740423    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:49.755073    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:49.755085    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:49.794420    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:49.794427    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:09:49.799065    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:49.799073    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:49.834267    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:49.834278    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:49.847385    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:49.847396    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:49.859129    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:49.859143    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:52.374581    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:09:57.377290    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:09:57.377538    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:09:57.402217    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:09:57.402314    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:09:57.420129    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:09:57.420211    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:09:57.433127    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:09:57.433189    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:09:57.444096    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:09:57.444169    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:09:57.454787    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:09:57.454852    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:09:57.465070    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:09:57.465139    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:09:57.475604    9661 logs.go:276] 0 containers: []
	W0717 11:09:57.475616    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:09:57.475668    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:09:57.491470    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:09:57.491488    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:09:57.491493    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:09:57.506195    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:09:57.506208    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:09:57.519576    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:09:57.519589    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:09:57.534270    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:09:57.534284    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:09:57.551641    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:09:57.551652    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:09:57.564395    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:09:57.564407    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:09:57.586875    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:09:57.586885    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:09:57.599760    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:09:57.599770    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:09:57.613700    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:09:57.613714    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:09:57.649336    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:09:57.649352    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:09:57.688623    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:09:57.688633    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:09:57.728243    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:09:57.728254    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:09:57.740435    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:09:57.740447    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:09:57.754415    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:09:57.754429    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:09:57.765598    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:09:57.765613    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:09:57.776898    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:09:57.776907    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:00.281899    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:05.284162    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:05.284323    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:05.305628    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:10:05.305722    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:05.320896    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:10:05.320986    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:05.333695    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:10:05.333763    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:05.345332    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:10:05.345412    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:05.355835    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:10:05.355908    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:05.367090    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:10:05.367152    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:05.377299    9661 logs.go:276] 0 containers: []
	W0717 11:10:05.377310    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:05.377361    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:05.387881    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:10:05.387899    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:10:05.387904    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:10:05.404270    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:10:05.404282    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:10:05.422052    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:10:05.422062    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:10:05.433754    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:10:05.433764    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:10:05.471810    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:10:05.471824    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:10:05.487016    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:10:05.487030    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:10:05.501058    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:10:05.501068    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:10:05.512737    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:05.512746    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:05.535813    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:10:05.535820    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:05.547134    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:05.547143    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:05.551796    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:05.551805    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:05.586674    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:10:05.586689    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:10:05.605212    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:10:05.605226    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:10:05.638574    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:10:05.638585    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:10:05.652366    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:05.652379    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:05.691597    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:10:05.691608    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:10:08.207261    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:13.209592    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:13.209707    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:10:13.224773    9661 logs.go:276] 2 containers: [94bc08fd372c c74cc3d31c5c]
	I0717 11:10:13.224848    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:10:13.238477    9661 logs.go:276] 2 containers: [c16c5b2059e6 f263e9f5bbf8]
	I0717 11:10:13.238546    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:10:13.249004    9661 logs.go:276] 1 containers: [a622dbc599b1]
	I0717 11:10:13.249091    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:10:13.259248    9661 logs.go:276] 2 containers: [d5844c3c2293 4550eb8b3005]
	I0717 11:10:13.259326    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:10:13.269332    9661 logs.go:276] 1 containers: [fd79f2b5bdfa]
	I0717 11:10:13.269398    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:10:13.280174    9661 logs.go:276] 2 containers: [4c8c8d2aa440 9e1750a1a505]
	I0717 11:10:13.280246    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:10:13.290599    9661 logs.go:276] 0 containers: []
	W0717 11:10:13.290614    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:10:13.290669    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:10:13.300633    9661 logs.go:276] 1 containers: [21188fab65b3]
	I0717 11:10:13.300651    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:10:13.300656    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:10:13.339690    9661 logs.go:123] Gathering logs for kube-apiserver [c74cc3d31c5c] ...
	I0717 11:10:13.339700    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c74cc3d31c5c"
	I0717 11:10:13.377631    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:10:13.377645    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:10:13.381673    9661 logs.go:123] Gathering logs for kube-apiserver [94bc08fd372c] ...
	I0717 11:10:13.381682    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94bc08fd372c"
	I0717 11:10:13.396135    9661 logs.go:123] Gathering logs for etcd [f263e9f5bbf8] ...
	I0717 11:10:13.396147    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f263e9f5bbf8"
	I0717 11:10:13.413099    9661 logs.go:123] Gathering logs for kube-proxy [fd79f2b5bdfa] ...
	I0717 11:10:13.413111    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd79f2b5bdfa"
	I0717 11:10:13.424585    9661 logs.go:123] Gathering logs for storage-provisioner [21188fab65b3] ...
	I0717 11:10:13.424598    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21188fab65b3"
	I0717 11:10:13.436060    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:10:13.436072    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:10:13.449005    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:10:13.449016    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:10:13.482730    9661 logs.go:123] Gathering logs for coredns [a622dbc599b1] ...
	I0717 11:10:13.482742    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a622dbc599b1"
	I0717 11:10:13.496417    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:10:13.496426    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:10:13.518124    9661 logs.go:123] Gathering logs for etcd [c16c5b2059e6] ...
	I0717 11:10:13.518134    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c16c5b2059e6"
	I0717 11:10:13.532519    9661 logs.go:123] Gathering logs for kube-scheduler [d5844c3c2293] ...
	I0717 11:10:13.532532    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5844c3c2293"
	I0717 11:10:13.545148    9661 logs.go:123] Gathering logs for kube-scheduler [4550eb8b3005] ...
	I0717 11:10:13.545159    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4550eb8b3005"
	I0717 11:10:13.561075    9661 logs.go:123] Gathering logs for kube-controller-manager [4c8c8d2aa440] ...
	I0717 11:10:13.561086    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c8c8d2aa440"
	I0717 11:10:13.577985    9661 logs.go:123] Gathering logs for kube-controller-manager [9e1750a1a505] ...
	I0717 11:10:13.577995    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1750a1a505"
	I0717 11:10:16.094355    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:21.096551    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:21.096623    9661 kubeadm.go:597] duration metric: took 4m3.533320834s to restartPrimaryControlPlane
	W0717 11:10:21.096685    9661 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 11:10:21.096714    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 11:10:22.088271    9661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 11:10:22.093386    9661 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 11:10:22.096294    9661 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 11:10:22.099146    9661 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 11:10:22.099154    9661 kubeadm.go:157] found existing configuration files:
	
	I0717 11:10:22.099178    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/admin.conf
	I0717 11:10:22.102076    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 11:10:22.102095    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 11:10:22.104747    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/kubelet.conf
	I0717 11:10:22.107521    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 11:10:22.107546    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 11:10:22.110538    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/controller-manager.conf
	I0717 11:10:22.113582    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 11:10:22.113605    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 11:10:22.116292    9661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/scheduler.conf
	I0717 11:10:22.119085    9661 kubeadm.go:163] "https://control-plane.minikube.internal:51499" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51499 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 11:10:22.119112    9661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 11:10:22.122130    9661 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 11:10:22.138176    9661 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0717 11:10:22.138205    9661 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 11:10:22.186956    9661 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 11:10:22.187020    9661 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 11:10:22.187079    9661 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 11:10:22.236757    9661 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 11:10:22.240928    9661 out.go:204]   - Generating certificates and keys ...
	I0717 11:10:22.240966    9661 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 11:10:22.241000    9661 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 11:10:22.241042    9661 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 11:10:22.241078    9661 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 11:10:22.241111    9661 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 11:10:22.241138    9661 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 11:10:22.241165    9661 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 11:10:22.241196    9661 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 11:10:22.241230    9661 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 11:10:22.241281    9661 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 11:10:22.241300    9661 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 11:10:22.241327    9661 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 11:10:22.339086    9661 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 11:10:22.473057    9661 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 11:10:22.521129    9661 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 11:10:22.562231    9661 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 11:10:22.591568    9661 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 11:10:22.592063    9661 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 11:10:22.592093    9661 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 11:10:22.677271    9661 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 11:10:22.681443    9661 out.go:204]   - Booting up control plane ...
	I0717 11:10:22.681522    9661 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 11:10:22.681670    9661 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 11:10:22.681767    9661 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 11:10:22.681872    9661 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 11:10:22.681997    9661 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 11:10:26.683261    9661 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001862 seconds
	I0717 11:10:26.683337    9661 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 11:10:26.687225    9661 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 11:10:27.200479    9661 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 11:10:27.200722    9661 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-018000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 11:10:27.703982    9661 kubeadm.go:310] [bootstrap-token] Using token: cpnl27.6prg557gnbcwpr9w
	I0717 11:10:27.707423    9661 out.go:204]   - Configuring RBAC rules ...
	I0717 11:10:27.707479    9661 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 11:10:27.707526    9661 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 11:10:27.710730    9661 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 11:10:27.711718    9661 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 11:10:27.712615    9661 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 11:10:27.713392    9661 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 11:10:27.716700    9661 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 11:10:27.891333    9661 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 11:10:28.111069    9661 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 11:10:28.111113    9661 kubeadm.go:310] 
	I0717 11:10:28.111277    9661 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 11:10:28.111286    9661 kubeadm.go:310] 
	I0717 11:10:28.111345    9661 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 11:10:28.111352    9661 kubeadm.go:310] 
	I0717 11:10:28.111364    9661 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 11:10:28.111394    9661 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 11:10:28.111425    9661 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 11:10:28.111432    9661 kubeadm.go:310] 
	I0717 11:10:28.111520    9661 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 11:10:28.111525    9661 kubeadm.go:310] 
	I0717 11:10:28.111552    9661 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 11:10:28.111555    9661 kubeadm.go:310] 
	I0717 11:10:28.111654    9661 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 11:10:28.111736    9661 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 11:10:28.111847    9661 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 11:10:28.111889    9661 kubeadm.go:310] 
	I0717 11:10:28.112016    9661 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 11:10:28.112076    9661 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 11:10:28.112080    9661 kubeadm.go:310] 
	I0717 11:10:28.112149    9661 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cpnl27.6prg557gnbcwpr9w \
	I0717 11:10:28.112219    9661 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:41cf485c9ce3ed322472481a1cde965b121021497c454b4b9fd17940d4869b14 \
	I0717 11:10:28.112246    9661 kubeadm.go:310] 	--control-plane 
	I0717 11:10:28.112251    9661 kubeadm.go:310] 
	I0717 11:10:28.112303    9661 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 11:10:28.112309    9661 kubeadm.go:310] 
	I0717 11:10:28.112350    9661 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cpnl27.6prg557gnbcwpr9w \
	I0717 11:10:28.112406    9661 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:41cf485c9ce3ed322472481a1cde965b121021497c454b4b9fd17940d4869b14 
	I0717 11:10:28.112495    9661 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 11:10:28.112565    9661 cni.go:84] Creating CNI manager for ""
	I0717 11:10:28.112574    9661 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:10:28.116594    9661 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 11:10:28.123672    9661 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 11:10:28.126533    9661 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 11:10:28.131565    9661 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 11:10:28.131624    9661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-018000 minikube.k8s.io/updated_at=2024_07_17T11_10_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=stopped-upgrade-018000 minikube.k8s.io/primary=true
	I0717 11:10:28.131624    9661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 11:10:28.161452    9661 kubeadm.go:1113] duration metric: took 29.86525ms to wait for elevateKubeSystemPrivileges
	I0717 11:10:28.173572    9661 ops.go:34] apiserver oom_adj: -16
	I0717 11:10:28.173724    9661 kubeadm.go:394] duration metric: took 4m10.624153167s to StartCluster
	I0717 11:10:28.173741    9661 settings.go:142] acquiring lock: {Name:mk52ddc32cf249ba715452a288aa286713554b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:28.173835    9661 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:10:28.174241    9661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/kubeconfig: {Name:mk327624617af42cf4cf2f31d3ffb3402af5684d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:10:28.174469    9661 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:10:28.174559    9661 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:10:28.174495    9661 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 11:10:28.174587    9661 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-018000"
	I0717 11:10:28.174604    9661 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-018000"
	W0717 11:10:28.174608    9661 addons.go:243] addon storage-provisioner should already be in state true
	I0717 11:10:28.174620    9661 host.go:66] Checking if "stopped-upgrade-018000" exists ...
	I0717 11:10:28.174625    9661 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-018000"
	I0717 11:10:28.174641    9661 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-018000"
	I0717 11:10:28.175135    9661 retry.go:31] will retry after 985.009376ms: connect: dial unix /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/monitor: connect: connection refused
	I0717 11:10:28.175888    9661 kapi.go:59] client config for stopped-upgrade-018000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/stopped-upgrade-018000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-6848/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c47730), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 11:10:28.176028    9661 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-018000"
	W0717 11:10:28.176033    9661 addons.go:243] addon default-storageclass should already be in state true
	I0717 11:10:28.176041    9661 host.go:66] Checking if "stopped-upgrade-018000" exists ...
	I0717 11:10:28.176666    9661 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 11:10:28.176671    9661 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 11:10:28.176677    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:10:28.178691    9661 out.go:177] * Verifying Kubernetes components...
	I0717 11:10:28.185626    9661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 11:10:28.287226    9661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 11:10:28.292976    9661 api_server.go:52] waiting for apiserver process to appear ...
	I0717 11:10:28.293027    9661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 11:10:28.297209    9661 api_server.go:72] duration metric: took 122.726709ms to wait for apiserver process to appear ...
	I0717 11:10:28.297219    9661 api_server.go:88] waiting for apiserver healthz status ...
	I0717 11:10:28.297228    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:28.341591    9661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 11:10:29.167018    9661 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 11:10:29.171080    9661 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:10:29.171088    9661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 11:10:29.171096    9661 sshutil.go:53] new ssh client: &{IP:localhost Port:51465 SSHKeyPath:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/stopped-upgrade-018000/id_rsa Username:docker}
	I0717 11:10:29.209195    9661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 11:10:33.298088    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:33.298133    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:38.298473    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:38.298498    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:43.299222    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:43.299299    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:48.299940    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:48.300018    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:53.300490    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:53.300525    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:10:58.301227    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:10:58.301266    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0717 11:10:58.679425    9661 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0717 11:10:58.683805    9661 out.go:177] * Enabled addons: storage-provisioner
	I0717 11:10:58.689716    9661 addons.go:510] duration metric: took 30.5154425s for enable addons: enabled=[storage-provisioner]
	I0717 11:11:03.302078    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:03.302120    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:08.303265    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:08.303280    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:13.303910    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:13.303961    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:18.305486    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:18.305507    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:23.307319    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:23.307369    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:28.309725    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:28.309925    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:28.325726    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:11:28.325804    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:28.338637    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:11:28.338717    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:28.359053    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:11:28.359115    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:28.369374    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:11:28.369432    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:28.379021    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:11:28.379085    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:28.389221    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:11:28.389274    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:28.399128    9661 logs.go:276] 0 containers: []
	W0717 11:11:28.399140    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:28.399197    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:28.409596    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:11:28.409613    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:28.409619    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:28.446570    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:11:28.446581    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:11:28.467428    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:11:28.467441    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:11:28.478663    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:11:28.478676    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:11:28.493827    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:11:28.493839    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:11:28.511173    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:28.511184    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:28.546472    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:28.546481    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:28.550530    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:11:28.550536    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:11:28.562511    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:11:28.562525    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:11:28.574201    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:28.574215    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:28.597306    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:11:28.597311    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:28.608965    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:11:28.608978    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:11:28.622614    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:11:28.622627    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:11:31.136371    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:36.137534    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:36.137986    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:36.176725    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:11:36.176852    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:36.198526    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:11:36.198628    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:36.214062    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:11:36.214139    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:36.229029    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:11:36.229096    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:36.239424    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:11:36.239490    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:36.249756    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:11:36.249822    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:36.259704    9661 logs.go:276] 0 containers: []
	W0717 11:11:36.259717    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:36.259772    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:36.270111    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:11:36.270126    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:11:36.270132    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:11:36.291504    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:11:36.291518    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:11:36.310448    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:36.310458    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:36.333876    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:36.333886    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:36.367599    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:11:36.367610    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:11:36.386421    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:11:36.386436    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:11:36.400970    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:11:36.400981    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:11:36.412957    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:11:36.412970    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:11:36.428459    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:11:36.428472    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:11:36.439690    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:11:36.439700    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:36.451350    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:36.451360    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:36.485909    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:36.485918    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:36.490473    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:11:36.490479    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:11:39.004324    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:44.006863    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:44.007097    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:44.032638    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:11:44.032743    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:44.049190    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:11:44.049257    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:44.062466    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:11:44.062524    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:44.075773    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:11:44.075827    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:44.085616    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:11:44.085681    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:44.096192    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:11:44.096257    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:44.106028    9661 logs.go:276] 0 containers: []
	W0717 11:11:44.106039    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:44.106088    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:44.116282    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:11:44.116297    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:44.116302    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:44.120750    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:11:44.120755    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:11:44.139986    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:11:44.139999    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:11:44.151441    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:11:44.151451    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:11:44.163031    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:11:44.163043    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:11:44.178137    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:11:44.178149    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:11:44.195141    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:44.195150    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:44.219560    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:44.219566    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:44.254723    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:44.254729    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:44.290383    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:11:44.290394    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:11:44.304245    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:11:44.304254    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:11:44.315856    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:11:44.315869    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:11:44.327602    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:11:44.327615    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:46.841549    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:51.843749    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:51.844101    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:51.878399    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:11:51.878541    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:51.897930    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:11:51.898009    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:51.912131    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:11:51.912189    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:51.928974    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:11:51.929032    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:51.938902    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:11:51.938969    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:51.950580    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:11:51.950643    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:51.960568    9661 logs.go:276] 0 containers: []
	W0717 11:11:51.960578    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:51.960636    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:51.972444    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:11:51.972460    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:51.972465    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:52.006401    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:52.006409    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:52.010717    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:52.010726    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:52.044083    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:11:52.044096    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:11:52.055797    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:11:52.055808    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:11:52.070445    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:11:52.070456    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:11:52.087754    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:11:52.087765    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:11:52.106198    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:11:52.106211    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:11:52.121445    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:11:52.121457    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:11:52.135588    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:11:52.135598    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:11:52.147122    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:11:52.147132    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:11:52.158429    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:52.158441    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:52.184174    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:11:52.184182    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:54.697598    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:11:59.699981    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:11:59.700356    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:11:59.729865    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:11:59.729979    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:11:59.748356    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:11:59.748445    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:11:59.762924    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:11:59.762996    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:11:59.775163    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:11:59.775230    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:11:59.785758    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:11:59.785830    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:11:59.796305    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:11:59.796368    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:11:59.806220    9661 logs.go:276] 0 containers: []
	W0717 11:11:59.806234    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:11:59.806278    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:11:59.816680    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:11:59.816695    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:11:59.816700    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:11:59.830519    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:11:59.830532    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:11:59.841688    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:11:59.841701    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:11:59.860734    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:11:59.860747    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:11:59.872413    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:11:59.872423    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:11:59.896875    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:11:59.896882    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:11:59.932900    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:11:59.932909    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:11:59.937431    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:11:59.937440    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:11:59.975888    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:11:59.975900    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:11:59.988018    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:11:59.988028    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:11:59.999820    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:11:59.999829    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:12:00.015049    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:12:00.015062    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:12:00.026888    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:12:00.026902    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:12:02.541768    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:07.544488    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:07.544862    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:07.584284    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:12:07.584427    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:07.605843    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:12:07.605950    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:07.620912    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:12:07.620988    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:07.633674    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:12:07.633736    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:07.644439    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:12:07.644512    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:07.655199    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:12:07.655263    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:07.667961    9661 logs.go:276] 0 containers: []
	W0717 11:12:07.667976    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:07.668036    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:07.678502    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:12:07.678517    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:07.678522    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:07.682997    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:12:07.683007    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:12:07.697605    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:12:07.697616    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:12:07.712194    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:12:07.712206    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:12:07.731538    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:12:07.731548    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:07.742539    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:12:07.742553    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:12:07.760753    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:12:07.760764    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:12:07.775920    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:07.775934    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:07.800846    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:07.800856    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:07.834139    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:07.834146    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:07.874432    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:12:07.874445    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:12:07.887452    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:12:07.887461    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:12:07.903156    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:12:07.903169    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:12:10.417649    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:15.420357    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:15.420648    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:15.451196    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:12:15.451327    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:15.470641    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:12:15.470718    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:15.483974    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:12:15.484041    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:15.494925    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:12:15.494991    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:15.505428    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:12:15.505494    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:15.515837    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:12:15.515896    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:15.526160    9661 logs.go:276] 0 containers: []
	W0717 11:12:15.526170    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:15.526222    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:15.538006    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:12:15.538020    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:15.538026    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:15.576405    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:12:15.576416    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:12:15.590689    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:12:15.590700    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:12:15.603109    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:12:15.603121    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:12:15.618194    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:12:15.618204    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:12:15.635285    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:12:15.635295    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:12:15.646994    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:15.647004    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:15.670286    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:15.670294    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:15.704084    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:15.704090    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:15.707981    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:12:15.707990    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:12:15.721639    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:12:15.721647    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:12:15.733869    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:12:15.733878    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:12:15.745768    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:12:15.745781    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:18.265916    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:23.268471    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:23.268849    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:23.310804    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:12:23.310951    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:23.334746    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:12:23.334838    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:23.353660    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:12:23.353738    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:23.366269    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:12:23.366434    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:23.377018    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:12:23.377075    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:23.387071    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:12:23.387135    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:23.397542    9661 logs.go:276] 0 containers: []
	W0717 11:12:23.397554    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:23.397600    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:23.407712    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:12:23.407729    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:12:23.407734    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:12:23.427116    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:12:23.427127    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:12:23.440234    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:12:23.440249    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:12:23.460054    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:23.460068    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:23.495404    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:23.495413    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:23.499464    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:23.499470    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:23.540425    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:12:23.540441    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:12:23.559859    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:12:23.559868    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:12:23.573992    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:12:23.574005    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:12:23.585477    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:12:23.585492    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:12:23.600203    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:12:23.600214    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:12:23.613774    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:23.613787    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:23.637115    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:12:23.637122    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:26.149051    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:31.151421    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:31.151656    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:31.173839    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:12:31.173927    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:31.187303    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:12:31.187369    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:31.199446    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:12:31.199515    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:31.210111    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:12:31.210176    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:31.221064    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:12:31.221128    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:31.231351    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:12:31.231412    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:31.242056    9661 logs.go:276] 0 containers: []
	W0717 11:12:31.242066    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:31.242115    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:31.253118    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:12:31.253134    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:31.253139    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:31.257754    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:31.257763    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:31.292150    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:12:31.292161    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:12:31.306592    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:12:31.306604    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:12:31.319252    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:12:31.319266    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:12:31.335326    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:12:31.335339    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:12:31.352039    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:12:31.352049    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:12:31.363500    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:31.363510    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:31.398866    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:12:31.398877    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:12:31.412698    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:12:31.412710    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:12:31.424772    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:12:31.424784    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:12:31.436356    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:31.436367    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:31.461283    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:12:31.461293    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:33.975190    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:38.977879    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:38.978279    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:39.009690    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:12:39.009806    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:39.028776    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:12:39.028858    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:39.043954    9661 logs.go:276] 2 containers: [63c563bfc16d 99779a7a3777]
	I0717 11:12:39.044018    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:39.057917    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:12:39.057985    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:39.068833    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:12:39.068900    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:39.079347    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:12:39.079403    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:39.089734    9661 logs.go:276] 0 containers: []
	W0717 11:12:39.089748    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:39.089805    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:39.104161    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:12:39.104177    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:12:39.104182    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:12:39.116049    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:12:39.116059    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:12:39.127766    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:12:39.127775    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:12:39.139326    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:39.139335    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:39.164751    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:12:39.164760    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:39.176048    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:39.176060    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:39.248644    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:12:39.248656    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:12:39.282726    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:12:39.282741    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:12:39.311221    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:12:39.311242    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:12:39.361480    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:12:39.361493    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:12:39.373632    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:12:39.373645    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:12:39.391226    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:39.391240    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:39.427311    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:39.427320    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:41.933524    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:46.934615    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:46.934669    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:46.945990    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:12:46.946053    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:46.958382    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:12:46.958434    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:46.970082    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:12:46.970144    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:46.980869    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:12:46.980916    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:46.992219    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:12:46.992288    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:47.004687    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:12:47.004745    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:47.015455    9661 logs.go:276] 0 containers: []
	W0717 11:12:47.015465    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:47.015516    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:47.026081    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:12:47.026099    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:12:47.026105    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:12:47.038203    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:12:47.038214    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:47.052277    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:47.052289    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:47.088335    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:12:47.088353    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:12:47.103436    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:12:47.103445    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:12:47.114994    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:12:47.115007    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:12:47.133667    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:12:47.133679    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:12:47.146416    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:47.146425    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:47.171050    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:47.171063    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:47.176022    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:47.176033    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:47.212821    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:12:47.212833    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:12:47.226152    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:12:47.226163    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:12:47.241961    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:12:47.241972    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:12:47.257198    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:12:47.257210    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:12:47.272631    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:12:47.272641    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:12:49.786946    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:12:54.789788    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:12:54.790203    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:12:54.830800    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:12:54.830953    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:12:54.853399    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:12:54.853467    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:12:54.868128    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:12:54.868189    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:12:54.888478    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:12:54.888558    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:12:54.906267    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:12:54.906337    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:12:54.918777    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:12:54.918855    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:12:54.931352    9661 logs.go:276] 0 containers: []
	W0717 11:12:54.931366    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:12:54.931433    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:12:54.944309    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:12:54.944329    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:12:54.944334    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:12:54.980614    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:12:54.980632    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:12:55.021139    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:12:55.021149    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:12:55.036466    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:12:55.036479    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:12:55.054812    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:12:55.054824    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:12:55.080856    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:12:55.080866    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:12:55.093019    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:12:55.093034    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:12:55.105224    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:12:55.105235    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:12:55.110365    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:12:55.110373    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:12:55.122185    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:12:55.122195    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:12:55.137372    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:12:55.137383    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:12:55.153942    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:12:55.153951    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:12:55.167272    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:12:55.167282    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:12:55.182544    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:12:55.182553    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:12:55.194412    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:12:55.194427    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:12:57.722016    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:02.723955    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:02.724409    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:02.763451    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:13:02.763577    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:02.785446    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:13:02.785543    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:02.800398    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:13:02.800473    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:02.812603    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:13:02.812668    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:02.824000    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:13:02.824069    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:02.834795    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:13:02.834866    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:02.848397    9661 logs.go:276] 0 containers: []
	W0717 11:13:02.848408    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:02.848466    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:02.859331    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:13:02.859349    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:02.859355    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:02.902518    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:13:02.902529    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:13:02.917072    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:13:02.917086    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:13:02.929185    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:02.929195    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:02.965052    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:13:02.965062    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:13:02.980511    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:13:02.980523    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:02.991668    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:02.991682    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:02.995945    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:13:02.995952    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:13:03.010133    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:13:03.010142    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:13:03.021589    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:13:03.021603    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:13:03.032725    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:13:03.032739    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:13:03.048312    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:03.048324    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:03.072195    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:13:03.072204    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:13:03.086813    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:13:03.086822    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:13:03.104754    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:13:03.104765    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:13:05.623664    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:10.624368    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:10.624448    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:10.636209    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:13:10.636277    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:10.648355    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:13:10.648423    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:10.661937    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:13:10.661994    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:10.674580    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:13:10.674627    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:10.688704    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:13:10.688755    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:10.700030    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:13:10.700088    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:10.711440    9661 logs.go:276] 0 containers: []
	W0717 11:13:10.711452    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:10.711495    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:10.722707    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:13:10.722721    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:13:10.722728    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:13:10.738166    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:13:10.738183    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:13:10.758914    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:13:10.758926    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:13:10.771886    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:10.771895    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:10.796214    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:10.796234    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:10.832850    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:10.832864    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:10.874370    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:13:10.874383    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:13:10.889962    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:13:10.889972    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:13:10.907424    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:13:10.907436    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:13:10.920365    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:13:10.920376    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:13:10.934386    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:13:10.934395    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:13:10.947360    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:13:10.947371    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:10.959079    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:10.959086    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:10.963470    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:13:10.963478    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:13:10.977142    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:13:10.977155    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:13:13.493954    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:18.495201    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:18.495409    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:18.514650    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:13:18.514745    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:18.529310    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:13:18.529386    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:18.541547    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:13:18.541621    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:18.553759    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:13:18.553825    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:18.563862    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:13:18.563926    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:18.574838    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:13:18.574903    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:18.585394    9661 logs.go:276] 0 containers: []
	W0717 11:13:18.585404    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:18.585453    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:18.600061    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:13:18.600078    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:13:18.600083    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:13:18.615144    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:18.615154    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:18.619650    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:18.619660    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:18.655648    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:13:18.655657    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:13:18.670299    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:18.670312    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:18.695949    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:18.695956    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:18.730998    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:13:18.731004    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:13:18.745475    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:13:18.745489    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:13:18.757180    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:13:18.757191    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:13:18.768640    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:13:18.768651    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:18.781370    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:13:18.781382    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:13:18.793203    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:13:18.793216    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:13:18.805401    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:13:18.805411    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:13:18.822769    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:13:18.822781    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:13:18.834838    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:13:18.834849    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:13:21.351315    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:26.353950    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:26.354107    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:26.369715    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:13:26.369785    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:26.381486    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:13:26.381553    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:26.393429    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:13:26.393503    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:26.404315    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:13:26.404384    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:26.415230    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:13:26.415295    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:26.427453    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:13:26.427524    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:26.437805    9661 logs.go:276] 0 containers: []
	W0717 11:13:26.437816    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:26.437872    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:26.447992    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:13:26.448008    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:13:26.448014    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:26.459310    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:13:26.459319    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:13:26.471122    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:13:26.471132    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:13:26.496287    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:26.496295    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:26.521323    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:13:26.521334    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:13:26.532445    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:13:26.532455    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:13:26.547602    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:13:26.547611    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:13:26.563190    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:26.563199    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:26.597650    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:26.597657    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:26.632370    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:13:26.632381    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:13:26.646524    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:13:26.646533    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:13:26.658366    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:13:26.658377    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:13:26.669910    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:26.669919    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:26.674500    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:13:26.674508    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:13:26.688530    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:13:26.688540    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:13:29.202314    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:34.204985    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:34.205156    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:34.216588    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:13:34.216648    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:34.227578    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:13:34.227635    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:34.240892    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:13:34.240945    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:34.251703    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:13:34.251769    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:34.263008    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:13:34.263076    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:34.275390    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:13:34.275444    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:34.286515    9661 logs.go:276] 0 containers: []
	W0717 11:13:34.286526    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:34.286569    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:34.297704    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:13:34.297722    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:13:34.297729    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:13:34.310298    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:13:34.310309    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:13:34.333927    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:34.333937    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:34.338463    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:13:34.338474    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:13:34.354122    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:13:34.354133    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:13:34.371410    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:13:34.371419    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:13:34.383967    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:13:34.383980    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:13:34.397075    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:34.397085    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:34.423728    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:34.423741    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:34.460937    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:13:34.460955    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:13:34.473538    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:13:34.473549    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:13:34.494321    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:13:34.494332    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:34.506819    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:34.506832    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:34.551516    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:13:34.551528    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:13:34.567198    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:13:34.567212    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:13:37.083056    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:42.085888    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:42.086249    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:42.119936    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:13:42.120085    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:42.137729    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:13:42.137802    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:42.154153    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:13:42.154226    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:42.167398    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:13:42.167456    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:42.177817    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:13:42.177889    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:42.188222    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:13:42.188284    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:42.199160    9661 logs.go:276] 0 containers: []
	W0717 11:13:42.199175    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:42.199224    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:42.209807    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:13:42.209826    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:42.209830    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:42.243385    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:13:42.243392    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:13:42.255091    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:13:42.255103    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:13:42.276382    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:42.276394    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:42.302211    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:13:42.302220    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:13:42.314427    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:42.314438    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:42.319482    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:42.319492    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:42.353908    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:13:42.353919    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:13:42.371172    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:13:42.371183    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:13:42.383274    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:13:42.383285    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:13:42.402163    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:13:42.402174    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:13:42.413596    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:13:42.413608    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:13:42.426021    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:13:42.426033    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:13:42.437709    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:13:42.437721    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:13:42.452613    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:13:42.452625    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:44.966118    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:49.968773    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:49.969264    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:50.011067    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:13:50.011196    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:50.032003    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:13:50.032094    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:50.047148    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:13:50.047219    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:50.059568    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:13:50.059624    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:50.070052    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:13:50.070115    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:50.080082    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:13:50.080145    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:50.090428    9661 logs.go:276] 0 containers: []
	W0717 11:13:50.090439    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:50.090493    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:50.101003    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:13:50.101019    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:50.101024    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:50.125291    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:50.125297    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:50.161418    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:50.161431    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:50.197277    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:13:50.197290    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:13:50.209321    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:13:50.209330    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:13:50.226444    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:13:50.226454    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:13:50.237904    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:13:50.237913    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:13:50.252084    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:13:50.252094    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:13:50.268030    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:13:50.268040    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:13:50.279786    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:13:50.279797    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:13:50.293578    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:13:50.293588    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:13:50.305092    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:13:50.305101    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:50.318495    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:50.318508    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:50.323285    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:13:50.323293    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:13:50.340580    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:13:50.340589    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:13:52.857407    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:13:57.859748    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:13:57.859847    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:13:57.872204    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:13:57.872276    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:13:57.889057    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:13:57.889145    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:13:57.901356    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:13:57.901430    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:13:57.914323    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:13:57.914383    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:13:57.929551    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:13:57.929598    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:13:57.941183    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:13:57.941235    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:13:57.958565    9661 logs.go:276] 0 containers: []
	W0717 11:13:57.958576    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:13:57.958631    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:13:57.974611    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:13:57.974629    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:13:57.974634    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:13:57.987619    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:13:57.987635    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:13:58.010784    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:13:58.010792    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:13:58.034959    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:13:58.034973    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:13:58.053669    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:13:58.053681    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:13:58.067105    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:13:58.067116    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:13:58.080770    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:13:58.080779    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:13:58.096570    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:13:58.096582    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:13:58.101082    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:13:58.101091    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:13:58.116119    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:13:58.116133    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:13:58.128296    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:13:58.128305    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:13:58.163578    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:13:58.163596    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:13:58.200655    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:13:58.200667    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:13:58.216946    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:13:58.216958    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:13:58.233505    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:13:58.233518    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:14:00.747748    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:05.750486    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:05.750965    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:05.790602    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:14:05.790722    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:05.812067    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:14:05.812155    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:05.827316    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:14:05.827393    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:05.839934    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:14:05.839999    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:05.851719    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:14:05.851787    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:05.862717    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:14:05.862790    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:05.900111    9661 logs.go:276] 0 containers: []
	W0717 11:14:05.900123    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:05.900179    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:05.918501    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:14:05.918520    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:05.918526    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:05.923126    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:14:05.923133    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:14:05.937933    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:14:05.937946    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:14:05.949217    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:14:05.949228    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:05.963052    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:05.963065    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:05.999212    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:05.999225    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:06.069689    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:14:06.069700    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:14:06.084405    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:14:06.084416    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:14:06.096570    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:14:06.096580    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:14:06.113585    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:14:06.113598    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:14:06.125338    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:14:06.125354    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:14:06.140108    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:14:06.140119    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:14:06.151787    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:14:06.151798    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:14:06.163557    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:06.163568    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:06.188646    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:14:06.188653    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:14:08.708843    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:13.711306    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:13.711774    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:13.750656    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:14:13.750772    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:13.772661    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:14:13.772759    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:13.788007    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:14:13.788071    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:13.800758    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:14:13.800823    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:13.811899    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:14:13.811955    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:13.822517    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:14:13.822573    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:13.833184    9661 logs.go:276] 0 containers: []
	W0717 11:14:13.833195    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:13.833254    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:13.844070    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:14:13.844090    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:14:13.844095    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:13.858101    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:14:13.858112    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:14:13.872695    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:14:13.872706    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:14:13.886526    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:14:13.886538    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:14:13.898259    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:14:13.898270    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:14:13.912751    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:14:13.912763    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:14:13.931324    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:13.931337    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:13.966948    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:13.966955    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:13.971773    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:14:13.971782    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:14:13.984450    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:13.984462    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:14.007291    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:14:14.007299    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:14:14.018894    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:14:14.018907    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:14:14.030623    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:14:14.030636    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:14:14.042216    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:14.042227    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:14.078976    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:14:14.078990    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:14:16.598719    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:21.601096    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:21.601450    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 11:14:21.637571    9661 logs.go:276] 1 containers: [4811e2aacf4d]
	I0717 11:14:21.637703    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 11:14:21.660831    9661 logs.go:276] 1 containers: [5d43363841b5]
	I0717 11:14:21.660940    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 11:14:21.676479    9661 logs.go:276] 4 containers: [316e00c7d849 e0d93bcf04eb 63c563bfc16d 99779a7a3777]
	I0717 11:14:21.676555    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 11:14:21.689407    9661 logs.go:276] 1 containers: [c1b8303d2e32]
	I0717 11:14:21.689475    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 11:14:21.700501    9661 logs.go:276] 1 containers: [390666f3b7f4]
	I0717 11:14:21.700566    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 11:14:21.712602    9661 logs.go:276] 1 containers: [f6d434f83d39]
	I0717 11:14:21.712664    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 11:14:21.723019    9661 logs.go:276] 0 containers: []
	W0717 11:14:21.723031    9661 logs.go:278] No container was found matching "kindnet"
	I0717 11:14:21.723104    9661 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 11:14:21.733602    9661 logs.go:276] 1 containers: [85baf31ae21c]
	I0717 11:14:21.733622    9661 logs.go:123] Gathering logs for kube-controller-manager [f6d434f83d39] ...
	I0717 11:14:21.733626    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6d434f83d39"
	I0717 11:14:21.751045    9661 logs.go:123] Gathering logs for describe nodes ...
	I0717 11:14:21.751054    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 11:14:21.786356    9661 logs.go:123] Gathering logs for kube-apiserver [4811e2aacf4d] ...
	I0717 11:14:21.786368    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4811e2aacf4d"
	I0717 11:14:21.800965    9661 logs.go:123] Gathering logs for etcd [5d43363841b5] ...
	I0717 11:14:21.800977    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d43363841b5"
	I0717 11:14:21.815855    9661 logs.go:123] Gathering logs for kube-proxy [390666f3b7f4] ...
	I0717 11:14:21.815866    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390666f3b7f4"
	I0717 11:14:21.827507    9661 logs.go:123] Gathering logs for storage-provisioner [85baf31ae21c] ...
	I0717 11:14:21.827520    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85baf31ae21c"
	I0717 11:14:21.841288    9661 logs.go:123] Gathering logs for Docker ...
	I0717 11:14:21.841298    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 11:14:21.866606    9661 logs.go:123] Gathering logs for dmesg ...
	I0717 11:14:21.866616    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 11:14:21.871211    9661 logs.go:123] Gathering logs for coredns [e0d93bcf04eb] ...
	I0717 11:14:21.871216    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0d93bcf04eb"
	I0717 11:14:21.882964    9661 logs.go:123] Gathering logs for coredns [63c563bfc16d] ...
	I0717 11:14:21.882977    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63c563bfc16d"
	I0717 11:14:21.895138    9661 logs.go:123] Gathering logs for container status ...
	I0717 11:14:21.895150    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 11:14:21.906801    9661 logs.go:123] Gathering logs for kubelet ...
	I0717 11:14:21.906809    9661 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 11:14:21.941645    9661 logs.go:123] Gathering logs for coredns [316e00c7d849] ...
	I0717 11:14:21.941665    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e00c7d849"
	I0717 11:14:21.955002    9661 logs.go:123] Gathering logs for coredns [99779a7a3777] ...
	I0717 11:14:21.955015    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99779a7a3777"
	I0717 11:14:21.967509    9661 logs.go:123] Gathering logs for kube-scheduler [c1b8303d2e32] ...
	I0717 11:14:21.967520    9661 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1b8303d2e32"
	I0717 11:14:24.494432    9661 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0717 11:14:29.496619    9661 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 11:14:29.502264    9661 out.go:177] 
	W0717 11:14:29.507271    9661 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0717 11:14:29.507286    9661 out.go:239] * 
	W0717 11:14:29.508709    9661 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:14:29.522238    9661 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-018000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.83s)
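The failure above is the apiserver healthz probe never reporting healthy within the 6m0s node wait: every `Checking apiserver healthz at https://10.0.2.15:8443/healthz` attempt ends in a 5s client timeout. A minimal sketch of reproducing that probe by hand, assuming the guest address from the log is reachable from where you run it (`curl` here stands in for minikube's internal HTTP client; it is not what minikube itself invokes):

```shell
# Probe the same endpoint minikube polls above. -k skips TLS verification
# (the apiserver uses a cluster-internal CA); --max-time mirrors the ~5s
# client timeout seen in the log.
HEALTHZ_URL="https://10.0.2.15:8443/healthz"
RESPONSE=$(curl -k --silent --max-time 5 "$HEALTHZ_URL" 2>/dev/null) || RESPONSE="unreachable"
echo "healthz: ${RESPONSE}"
```

A healthy apiserver answers `ok`; anything else (or a timeout, as in this run) means the control plane never came up inside the guest.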

TestPause/serial/Start (9.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-051000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-051000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.814218459s)

-- stdout --
	* [pause-051000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-051000" primary control-plane node in "pause-051000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-051000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-051000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-051000 -n pause-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-051000 -n pause-051000: exit status 7 (52.183292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.87s)
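This failure (like the NoKubernetes runs that follow) never reaches Kubernetes at all: the qemu2 driver cannot connect to the socket_vmnet helper, so VM creation aborts with `Connection refused`. A minimal pre-flight sketch, assuming socket_vmnet's default socket path `/var/run/socket_vmnet` and a Homebrew-managed service (the `brew services` name is an assumption; adjust for how socket_vmnet was installed on the CI host):

```shell
# Check whether the socket_vmnet daemon has its UNIX socket open before
# invoking the qemu2 driver. -S tests specifically for a socket file.
SOCKET="/var/run/socket_vmnet"
if [ -S "$SOCKET" ]; then
  echo "socket_vmnet socket present: $SOCKET"
else
  echo "socket_vmnet socket missing: $SOCKET (try: sudo brew services start socket_vmnet)"
fi
```

When the socket is absent, every `minikube start --driver=qemu2` on the host will fail the same way, which is consistent with the repeated `GUEST_PROVISION` exits in this report.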

TestNoKubernetes/serial/StartWithK8s (9.82s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-337000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-337000 --driver=qemu2 : exit status 80 (9.751384459s)

-- stdout --
	* [NoKubernetes-337000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-337000" primary control-plane node in "NoKubernetes-337000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-337000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-337000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-337000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-337000 -n NoKubernetes-337000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-337000 -n NoKubernetes-337000: exit status 7 (63.996209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-337000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.82s)
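Every failure in this run reduces to the same root cause: the daemon behind `/var/run/socket_vmnet` is not accepting connections, so `socket_vmnet_client` cannot hand QEMU a networking file descriptor. A minimal sketch of that connect step (the `probe_unix_socket` helper is hypothetical, not part of minikube or socket_vmnet) shows how a "Connection refused" result differs from a missing socket file:

```python
import socket

def probe_unix_socket(path: str) -> str:
    """Classify the state of a unix-domain socket, mirroring the
    connect step socket_vmnet_client performs before launching QEMU."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "listening"        # a daemon accepted the connection
    except FileNotFoundError:
        return "missing"          # the socket file does not exist
    except ConnectionRefusedError:
        return "refused"          # file exists, but nothing is accepting
    finally:
        s.close()
```

A "refused" result matches the errors above: the socket path is present but the socket_vmnet service is not running on the test host, which is why every create and restart attempt fails identically.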

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-337000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-337000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243167042s)

-- stdout --
	* [NoKubernetes-337000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-337000
	* Restarting existing qemu2 VM for "NoKubernetes-337000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-337000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-337000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-337000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-337000 -n NoKubernetes-337000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-337000 -n NoKubernetes-337000: exit status 7 (58.897708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-337000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-337000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-337000 --no-kubernetes --driver=qemu2 : exit status 80 (5.23625775s)

-- stdout --
	* [NoKubernetes-337000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-337000
	* Restarting existing qemu2 VM for "NoKubernetes-337000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-337000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-337000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-337000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-337000 -n NoKubernetes-337000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-337000 -n NoKubernetes-337000: exit status 7 (53.326ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-337000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.28s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-337000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-337000 --driver=qemu2 : exit status 80 (5.243145417s)

-- stdout --
	* [NoKubernetes-337000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-337000
	* Restarting existing qemu2 VM for "NoKubernetes-337000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-337000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-337000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-337000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-337000 -n NoKubernetes-337000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-337000 -n NoKubernetes-337000: exit status 7 (31.770958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-337000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.28s)

TestNetworkPlugins/group/auto/Start (9.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.74151025s)

-- stdout --
	* [auto-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-031000" primary control-plane node in "auto-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:12:35.531307   10101 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:12:35.531433   10101 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:12:35.531436   10101 out.go:304] Setting ErrFile to fd 2...
	I0717 11:12:35.531438   10101 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:12:35.531565   10101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:12:35.532711   10101 out.go:298] Setting JSON to false
	I0717 11:12:35.549421   10101 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6123,"bootTime":1721233832,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:12:35.549492   10101 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:12:35.556818   10101 out.go:177] * [auto-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:12:35.563509   10101 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:12:35.563633   10101 notify.go:220] Checking for updates...
	I0717 11:12:35.571512   10101 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:12:35.574567   10101 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:12:35.581589   10101 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:12:35.584566   10101 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:12:35.587547   10101 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:12:35.590859   10101 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:12:35.590920   10101 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:12:35.590969   10101 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:12:35.595558   10101 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:12:35.602557   10101 start.go:297] selected driver: qemu2
	I0717 11:12:35.602562   10101 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:12:35.602567   10101 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:12:35.604663   10101 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:12:35.607511   10101 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:12:35.610557   10101 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:12:35.610582   10101 cni.go:84] Creating CNI manager for ""
	I0717 11:12:35.610589   10101 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:12:35.610598   10101 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:12:35.610627   10101 start.go:340] cluster config:
	{Name:auto-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:12:35.614040   10101 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:12:35.622501   10101 out.go:177] * Starting "auto-031000" primary control-plane node in "auto-031000" cluster
	I0717 11:12:35.626582   10101 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:12:35.626600   10101 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:12:35.626619   10101 cache.go:56] Caching tarball of preloaded images
	I0717 11:12:35.626676   10101 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:12:35.626683   10101 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:12:35.626769   10101 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/auto-031000/config.json ...
	I0717 11:12:35.626780   10101 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/auto-031000/config.json: {Name:mkf944b9fb3fe9f909d53738827b9a671e966e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:12:35.627083   10101 start.go:360] acquireMachinesLock for auto-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:12:35.627113   10101 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "auto-031000"
	I0717 11:12:35.627121   10101 start.go:93] Provisioning new machine with config: &{Name:auto-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:12:35.627155   10101 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:12:35.631520   10101 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:12:35.646675   10101 start.go:159] libmachine.API.Create for "auto-031000" (driver="qemu2")
	I0717 11:12:35.646711   10101 client.go:168] LocalClient.Create starting
	I0717 11:12:35.646781   10101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:12:35.646815   10101 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:35.646823   10101 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:35.646860   10101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:12:35.646883   10101 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:35.646889   10101 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:35.647274   10101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:12:35.789132   10101 main.go:141] libmachine: Creating SSH key...
	I0717 11:12:35.867304   10101 main.go:141] libmachine: Creating Disk image...
	I0717 11:12:35.867312   10101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:12:35.867465   10101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2
	I0717 11:12:35.876791   10101 main.go:141] libmachine: STDOUT: 
	I0717 11:12:35.876811   10101 main.go:141] libmachine: STDERR: 
	I0717 11:12:35.876865   10101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2 +20000M
	I0717 11:12:35.885112   10101 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:12:35.885129   10101 main.go:141] libmachine: STDERR: 
	I0717 11:12:35.885141   10101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2
	I0717 11:12:35.885145   10101 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:12:35.885157   10101 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:12:35.885183   10101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:66:fc:49:5f:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2
	I0717 11:12:35.886933   10101 main.go:141] libmachine: STDOUT: 
	I0717 11:12:35.886950   10101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:12:35.886968   10101 client.go:171] duration metric: took 240.255709ms to LocalClient.Create
	I0717 11:12:37.889068   10101 start.go:128] duration metric: took 2.261913917s to createHost
	I0717 11:12:37.889120   10101 start.go:83] releasing machines lock for "auto-031000", held for 2.262016542s
	W0717 11:12:37.889153   10101 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:12:37.898791   10101 out.go:177] * Deleting "auto-031000" in qemu2 ...
	W0717 11:12:37.921322   10101 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:12:37.921350   10101 start.go:729] Will try again in 5 seconds ...
	I0717 11:12:42.923495   10101 start.go:360] acquireMachinesLock for auto-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:12:42.923828   10101 start.go:364] duration metric: took 261.834µs to acquireMachinesLock for "auto-031000"
	I0717 11:12:42.923900   10101 start.go:93] Provisioning new machine with config: &{Name:auto-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:auto-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:12:42.924010   10101 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:12:42.933368   10101 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:12:42.970869   10101 start.go:159] libmachine.API.Create for "auto-031000" (driver="qemu2")
	I0717 11:12:42.970916   10101 client.go:168] LocalClient.Create starting
	I0717 11:12:42.971023   10101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:12:42.971100   10101 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:42.971112   10101 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:42.971177   10101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:12:42.971217   10101 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:42.971228   10101 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:42.971665   10101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:12:43.120095   10101 main.go:141] libmachine: Creating SSH key...
	I0717 11:12:43.180711   10101 main.go:141] libmachine: Creating Disk image...
	I0717 11:12:43.180719   10101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:12:43.180920   10101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2
	I0717 11:12:43.190282   10101 main.go:141] libmachine: STDOUT: 
	I0717 11:12:43.190310   10101 main.go:141] libmachine: STDERR: 
	I0717 11:12:43.190356   10101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2 +20000M
	I0717 11:12:43.198241   10101 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:12:43.198265   10101 main.go:141] libmachine: STDERR: 
	I0717 11:12:43.198277   10101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2
	I0717 11:12:43.198282   10101 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:12:43.198290   10101 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:12:43.198313   10101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:28:39:d4:62:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/auto-031000/disk.qcow2
	I0717 11:12:43.199988   10101 main.go:141] libmachine: STDOUT: 
	I0717 11:12:43.200002   10101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:12:43.200015   10101 client.go:171] duration metric: took 229.095792ms to LocalClient.Create
	I0717 11:12:45.202218   10101 start.go:128] duration metric: took 2.278195166s to createHost
	I0717 11:12:45.202335   10101 start.go:83] releasing machines lock for "auto-031000", held for 2.278506834s
	W0717 11:12:45.202807   10101 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:12:45.213452   10101 out.go:177] 
	W0717 11:12:45.217536   10101 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:12:45.217561   10101 out.go:239] * 
	* 
	W0717 11:12:45.219981   10101 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:12:45.227489   10101 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.74s)

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.794787958s)

-- stdout --
	* [calico-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-031000" primary control-plane node in "calico-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:12:47.454051   10220 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:12:47.454181   10220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:12:47.454185   10220 out.go:304] Setting ErrFile to fd 2...
	I0717 11:12:47.454187   10220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:12:47.454316   10220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:12:47.455434   10220 out.go:298] Setting JSON to false
	I0717 11:12:47.472120   10220 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6135,"bootTime":1721233832,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:12:47.472194   10220 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:12:47.478899   10220 out.go:177] * [calico-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:12:47.486813   10220 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:12:47.486876   10220 notify.go:220] Checking for updates...
	I0717 11:12:47.493777   10220 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:12:47.496814   10220 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:12:47.498228   10220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:12:47.501747   10220 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:12:47.504781   10220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:12:47.508220   10220 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:12:47.508287   10220 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:12:47.508339   10220 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:12:47.512755   10220 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:12:47.519833   10220 start.go:297] selected driver: qemu2
	I0717 11:12:47.519839   10220 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:12:47.519845   10220 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:12:47.522374   10220 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:12:47.525801   10220 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:12:47.528821   10220 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:12:47.528835   10220 cni.go:84] Creating CNI manager for "calico"
	I0717 11:12:47.528840   10220 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0717 11:12:47.528868   10220 start.go:340] cluster config:
	{Name:calico-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:calico-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:12:47.532729   10220 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:12:47.540772   10220 out.go:177] * Starting "calico-031000" primary control-plane node in "calico-031000" cluster
	I0717 11:12:47.544844   10220 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:12:47.544859   10220 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:12:47.544872   10220 cache.go:56] Caching tarball of preloaded images
	I0717 11:12:47.544931   10220 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:12:47.544939   10220 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:12:47.545015   10220 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/calico-031000/config.json ...
	I0717 11:12:47.545033   10220 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/calico-031000/config.json: {Name:mkedf92f2d4162f7c66a44d220fe63b5d8399e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:12:47.545362   10220 start.go:360] acquireMachinesLock for calico-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:12:47.545395   10220 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "calico-031000"
	I0717 11:12:47.545405   10220 start.go:93] Provisioning new machine with config: &{Name:calico-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:calico-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:12:47.545438   10220 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:12:47.549788   10220 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:12:47.567306   10220 start.go:159] libmachine.API.Create for "calico-031000" (driver="qemu2")
	I0717 11:12:47.567336   10220 client.go:168] LocalClient.Create starting
	I0717 11:12:47.567403   10220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:12:47.567435   10220 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:47.567448   10220 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:47.567490   10220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:12:47.567513   10220 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:47.567522   10220 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:47.567903   10220 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:12:47.711160   10220 main.go:141] libmachine: Creating SSH key...
	I0717 11:12:47.779125   10220 main.go:141] libmachine: Creating Disk image...
	I0717 11:12:47.779136   10220 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:12:47.779321   10220 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2
	I0717 11:12:47.788799   10220 main.go:141] libmachine: STDOUT: 
	I0717 11:12:47.788818   10220 main.go:141] libmachine: STDERR: 
	I0717 11:12:47.788866   10220 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2 +20000M
	I0717 11:12:47.796695   10220 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:12:47.796708   10220 main.go:141] libmachine: STDERR: 
	I0717 11:12:47.796720   10220 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2
	I0717 11:12:47.796723   10220 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:12:47.796739   10220 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:12:47.796766   10220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:92:f2:6b:95:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2
	I0717 11:12:47.798282   10220 main.go:141] libmachine: STDOUT: 
	I0717 11:12:47.798297   10220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:12:47.798314   10220 client.go:171] duration metric: took 230.975584ms to LocalClient.Create
	I0717 11:12:49.800476   10220 start.go:128] duration metric: took 2.255029417s to createHost
	I0717 11:12:49.800525   10220 start.go:83] releasing machines lock for "calico-031000", held for 2.255136958s
	W0717 11:12:49.800597   10220 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:12:49.813010   10220 out.go:177] * Deleting "calico-031000" in qemu2 ...
	W0717 11:12:49.838732   10220 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:12:49.838761   10220 start.go:729] Will try again in 5 seconds ...
	I0717 11:12:54.839289   10220 start.go:360] acquireMachinesLock for calico-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:12:54.839524   10220 start.go:364] duration metric: took 195.375µs to acquireMachinesLock for "calico-031000"
	I0717 11:12:54.839552   10220 start.go:93] Provisioning new machine with config: &{Name:calico-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.2 ClusterName:calico-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:12:54.839637   10220 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:12:54.847530   10220 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:12:54.870482   10220 start.go:159] libmachine.API.Create for "calico-031000" (driver="qemu2")
	I0717 11:12:54.870527   10220 client.go:168] LocalClient.Create starting
	I0717 11:12:54.870641   10220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:12:54.870684   10220 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:54.870696   10220 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:54.870740   10220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:12:54.870768   10220 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:54.870777   10220 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:54.871139   10220 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:12:55.014165   10220 main.go:141] libmachine: Creating SSH key...
	I0717 11:12:55.153249   10220 main.go:141] libmachine: Creating Disk image...
	I0717 11:12:55.153261   10220 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:12:55.153468   10220 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2
	I0717 11:12:55.163893   10220 main.go:141] libmachine: STDOUT: 
	I0717 11:12:55.163924   10220 main.go:141] libmachine: STDERR: 
	I0717 11:12:55.163993   10220 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2 +20000M
	I0717 11:12:55.173432   10220 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:12:55.173453   10220 main.go:141] libmachine: STDERR: 
	I0717 11:12:55.173477   10220 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2
	I0717 11:12:55.173483   10220 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:12:55.173494   10220 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:12:55.173520   10220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:5d:f7:df:c5:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/calico-031000/disk.qcow2
	I0717 11:12:55.175572   10220 main.go:141] libmachine: STDOUT: 
	I0717 11:12:55.175590   10220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:12:55.175610   10220 client.go:171] duration metric: took 305.071ms to LocalClient.Create
	I0717 11:12:57.177817   10220 start.go:128] duration metric: took 2.338161833s to createHost
	I0717 11:12:57.177895   10220 start.go:83] releasing machines lock for "calico-031000", held for 2.33836875s
	W0717 11:12:57.178377   10220 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:12:57.190079   10220 out.go:177] 
	W0717 11:12:57.194160   10220 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:12:57.194205   10220 out.go:239] * 
	* 
	W0717 11:12:57.196794   10220 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:12:57.207069   10220 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.815208333s)

-- stdout --
	* [custom-flannel-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-031000" primary control-plane node in "custom-flannel-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:12:59.629609   10341 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:12:59.629747   10341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:12:59.629751   10341 out.go:304] Setting ErrFile to fd 2...
	I0717 11:12:59.629753   10341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:12:59.629878   10341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:12:59.630958   10341 out.go:298] Setting JSON to false
	I0717 11:12:59.647066   10341 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6147,"bootTime":1721233832,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:12:59.647133   10341 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:12:59.653073   10341 out.go:177] * [custom-flannel-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:12:59.661108   10341 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:12:59.661202   10341 notify.go:220] Checking for updates...
	I0717 11:12:59.668106   10341 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:12:59.671137   10341 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:12:59.674055   10341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:12:59.677153   10341 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:12:59.680113   10341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:12:59.683342   10341 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:12:59.683406   10341 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:12:59.683446   10341 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:12:59.688062   10341 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:12:59.695044   10341 start.go:297] selected driver: qemu2
	I0717 11:12:59.695051   10341 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:12:59.695058   10341 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:12:59.697248   10341 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:12:59.700094   10341 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:12:59.703160   10341 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:12:59.703182   10341 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0717 11:12:59.703189   10341 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0717 11:12:59.703220   10341 start.go:340] cluster config:
	{Name:custom-flannel-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:12:59.706796   10341 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:12:59.715111   10341 out.go:177] * Starting "custom-flannel-031000" primary control-plane node in "custom-flannel-031000" cluster
	I0717 11:12:59.717975   10341 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:12:59.717992   10341 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:12:59.718004   10341 cache.go:56] Caching tarball of preloaded images
	I0717 11:12:59.718058   10341 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:12:59.718063   10341 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:12:59.718126   10341 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/custom-flannel-031000/config.json ...
	I0717 11:12:59.718139   10341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/custom-flannel-031000/config.json: {Name:mka8f2e4e2a245da330b9dbe876a5efec959321a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:12:59.718332   10341 start.go:360] acquireMachinesLock for custom-flannel-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:12:59.718363   10341 start.go:364] duration metric: took 23.959µs to acquireMachinesLock for "custom-flannel-031000"
	I0717 11:12:59.718373   10341 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:12:59.718412   10341 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:12:59.726030   10341 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:12:59.741229   10341 start.go:159] libmachine.API.Create for "custom-flannel-031000" (driver="qemu2")
	I0717 11:12:59.741256   10341 client.go:168] LocalClient.Create starting
	I0717 11:12:59.741325   10341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:12:59.741356   10341 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:59.741369   10341 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:59.741416   10341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:12:59.741438   10341 main.go:141] libmachine: Decoding PEM data...
	I0717 11:12:59.741444   10341 main.go:141] libmachine: Parsing certificate...
	I0717 11:12:59.741771   10341 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:12:59.895879   10341 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:00.033689   10341 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:00.033698   10341 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:00.033868   10341 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2
	I0717 11:13:00.043271   10341 main.go:141] libmachine: STDOUT: 
	I0717 11:13:00.043289   10341 main.go:141] libmachine: STDERR: 
	I0717 11:13:00.043352   10341 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2 +20000M
	I0717 11:13:00.051289   10341 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:00.051305   10341 main.go:141] libmachine: STDERR: 
	I0717 11:13:00.051325   10341 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2
	I0717 11:13:00.051330   10341 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:00.051344   10341 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:00.051368   10341 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:0b:dd:fc:7d:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2
	I0717 11:13:00.053036   10341 main.go:141] libmachine: STDOUT: 
	I0717 11:13:00.053052   10341 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:00.053068   10341 client.go:171] duration metric: took 311.811958ms to LocalClient.Create
	I0717 11:13:02.055381   10341 start.go:128] duration metric: took 2.336926792s to createHost
	I0717 11:13:02.055503   10341 start.go:83] releasing machines lock for "custom-flannel-031000", held for 2.337145875s
	W0717 11:13:02.055558   10341 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:02.064008   10341 out.go:177] * Deleting "custom-flannel-031000" in qemu2 ...
	W0717 11:13:02.090756   10341 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:02.090787   10341 start.go:729] Will try again in 5 seconds ...
	I0717 11:13:07.092966   10341 start.go:360] acquireMachinesLock for custom-flannel-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:07.093389   10341 start.go:364] duration metric: took 313.917µs to acquireMachinesLock for "custom-flannel-031000"
	I0717 11:13:07.093533   10341 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:07.093802   10341 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:07.100412   10341 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:07.150672   10341 start.go:159] libmachine.API.Create for "custom-flannel-031000" (driver="qemu2")
	I0717 11:13:07.150728   10341 client.go:168] LocalClient.Create starting
	I0717 11:13:07.150849   10341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:07.150918   10341 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:07.150934   10341 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:07.150992   10341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:07.151036   10341 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:07.151049   10341 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:07.151599   10341 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:07.303625   10341 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:07.357271   10341 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:07.357281   10341 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:07.357486   10341 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2
	I0717 11:13:07.367812   10341 main.go:141] libmachine: STDOUT: 
	I0717 11:13:07.367836   10341 main.go:141] libmachine: STDERR: 
	I0717 11:13:07.367910   10341 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2 +20000M
	I0717 11:13:07.376469   10341 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:07.376488   10341 main.go:141] libmachine: STDERR: 
	I0717 11:13:07.376515   10341 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2
	I0717 11:13:07.376519   10341 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:07.376529   10341 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:07.376564   10341 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:fc:11:93:b4:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/custom-flannel-031000/disk.qcow2
	I0717 11:13:07.378214   10341 main.go:141] libmachine: STDOUT: 
	I0717 11:13:07.378230   10341 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:07.378243   10341 client.go:171] duration metric: took 227.510916ms to LocalClient.Create
	I0717 11:13:09.380393   10341 start.go:128] duration metric: took 2.286547208s to createHost
	I0717 11:13:09.380428   10341 start.go:83] releasing machines lock for "custom-flannel-031000", held for 2.287034208s
	W0717 11:13:09.380585   10341 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:09.388993   10341 out.go:177] 
	W0717 11:13:09.395049   10341 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:13:09.395063   10341 out.go:239] * 
	* 
	W0717 11:13:09.396384   10341 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:13:09.406988   10341 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)

TestNetworkPlugins/group/false/Start (9.75s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.752010083s)

-- stdout --
	* [false-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-031000" primary control-plane node in "false-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:13:11.789194   10470 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:13:11.789324   10470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:11.789329   10470 out.go:304] Setting ErrFile to fd 2...
	I0717 11:13:11.789332   10470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:11.789464   10470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:13:11.790571   10470 out.go:298] Setting JSON to false
	I0717 11:13:11.807193   10470 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6159,"bootTime":1721233832,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:13:11.807269   10470 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:13:11.814160   10470 out.go:177] * [false-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:13:11.821125   10470 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:13:11.821181   10470 notify.go:220] Checking for updates...
	I0717 11:13:11.828043   10470 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:13:11.831042   10470 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:13:11.834128   10470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:13:11.837047   10470 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:13:11.840037   10470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:13:11.843338   10470 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:13:11.843404   10470 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:13:11.843455   10470 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:13:11.846950   10470 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:13:11.854064   10470 start.go:297] selected driver: qemu2
	I0717 11:13:11.854070   10470 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:13:11.854077   10470 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:13:11.856520   10470 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:13:11.859005   10470 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:13:11.862135   10470 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:13:11.862155   10470 cni.go:84] Creating CNI manager for "false"
	I0717 11:13:11.862180   10470 start.go:340] cluster config:
	{Name:false-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:13:11.865604   10470 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:13:11.874058   10470 out.go:177] * Starting "false-031000" primary control-plane node in "false-031000" cluster
	I0717 11:13:11.878074   10470 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:13:11.878089   10470 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:13:11.878100   10470 cache.go:56] Caching tarball of preloaded images
	I0717 11:13:11.878150   10470 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:13:11.878155   10470 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:13:11.878206   10470 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/false-031000/config.json ...
	I0717 11:13:11.878218   10470 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/false-031000/config.json: {Name:mk393933ef5c340b3d8f8601168f5f1511f809c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:13:11.878414   10470 start.go:360] acquireMachinesLock for false-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:11.878445   10470 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "false-031000"
	I0717 11:13:11.878454   10470 start.go:93] Provisioning new machine with config: &{Name:false-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:11.878488   10470 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:11.888105   10470 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:11.903432   10470 start.go:159] libmachine.API.Create for "false-031000" (driver="qemu2")
	I0717 11:13:11.903462   10470 client.go:168] LocalClient.Create starting
	I0717 11:13:11.903525   10470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:11.903557   10470 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:11.903565   10470 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:11.903606   10470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:11.903631   10470 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:11.903639   10470 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:11.903991   10470 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:12.045818   10470 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:12.128310   10470 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:12.128316   10470 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:12.128473   10470 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2
	I0717 11:13:12.137879   10470 main.go:141] libmachine: STDOUT: 
	I0717 11:13:12.137900   10470 main.go:141] libmachine: STDERR: 
	I0717 11:13:12.137967   10470 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2 +20000M
	I0717 11:13:12.146010   10470 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:12.146023   10470 main.go:141] libmachine: STDERR: 
	I0717 11:13:12.146041   10470 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2
	I0717 11:13:12.146047   10470 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:12.146061   10470 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:12.146097   10470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:1d:f1:15:47:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2
	I0717 11:13:12.147750   10470 main.go:141] libmachine: STDOUT: 
	I0717 11:13:12.147768   10470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:12.147793   10470 client.go:171] duration metric: took 244.329666ms to LocalClient.Create
	I0717 11:13:14.150001   10470 start.go:128] duration metric: took 2.271497333s to createHost
	I0717 11:13:14.150106   10470 start.go:83] releasing machines lock for "false-031000", held for 2.271667042s
	W0717 11:13:14.150204   10470 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:14.161362   10470 out.go:177] * Deleting "false-031000" in qemu2 ...
	W0717 11:13:14.189068   10470 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:14.189102   10470 start.go:729] Will try again in 5 seconds ...
	I0717 11:13:19.191345   10470 start.go:360] acquireMachinesLock for false-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:19.191885   10470 start.go:364] duration metric: took 412.083µs to acquireMachinesLock for "false-031000"
	I0717 11:13:19.191960   10470 start.go:93] Provisioning new machine with config: &{Name:false-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:false-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:19.192191   10470 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:19.200886   10470 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:19.250490   10470 start.go:159] libmachine.API.Create for "false-031000" (driver="qemu2")
	I0717 11:13:19.250548   10470 client.go:168] LocalClient.Create starting
	I0717 11:13:19.250682   10470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:19.250766   10470 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:19.250785   10470 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:19.250860   10470 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:19.250906   10470 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:19.250919   10470 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:19.251447   10470 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:19.403959   10470 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:19.445398   10470 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:19.445404   10470 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:19.445575   10470 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2
	I0717 11:13:19.455557   10470 main.go:141] libmachine: STDOUT: 
	I0717 11:13:19.455582   10470 main.go:141] libmachine: STDERR: 
	I0717 11:13:19.455662   10470 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2 +20000M
	I0717 11:13:19.465086   10470 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:19.465109   10470 main.go:141] libmachine: STDERR: 
	I0717 11:13:19.465122   10470 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2
	I0717 11:13:19.465131   10470 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:19.465152   10470 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:19.465188   10470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:2a:b5:e1:d5:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/false-031000/disk.qcow2
	I0717 11:13:19.467315   10470 main.go:141] libmachine: STDOUT: 
	I0717 11:13:19.467333   10470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:19.467347   10470 client.go:171] duration metric: took 216.792416ms to LocalClient.Create
	I0717 11:13:21.469618   10470 start.go:128] duration metric: took 2.277410542s to createHost
	I0717 11:13:21.469684   10470 start.go:83] releasing machines lock for "false-031000", held for 2.277785667s
	W0717 11:13:21.470050   10470 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:21.480573   10470 out.go:177] 
	W0717 11:13:21.486552   10470 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:13:21.486584   10470 out.go:239] * 
	* 
	W0717 11:13:21.489408   10470 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:13:21.498466   10470 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.75s)
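Every start attempt in this group dies the same way: qemu is launched through socket_vmnet_client, which cannot reach the /var/run/socket_vmnet unix socket because no socket_vmnet daemon is listening behind it. A minimal Python sketch (using a throwaway temp path, not the CI machine's socket) shows how a socket file with no listener produces exactly this "Connection refused":

```python
import errno
import os
import socket
import tempfile

# A unix-domain socket file with no process listening behind it yields
# ECONNREFUSED on connect() -- the same "Connection refused" that
# socket_vmnet_client reports for /var/run/socket_vmnet in the logs above.
path = os.path.join(tempfile.mkdtemp(), "stale.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)   # creates the socket file on disk
srv.close()      # ...but leaves no listener behind it

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
refused = False
try:
    cli.connect(path)
except OSError as e:
    refused = e.errno == errno.ECONNREFUSED
finally:
    cli.close()

print(refused)  # → True
```

If that is the failure mode here, the remedy is likely operational rather than a minikube bug: the socket_vmnet service on the CI host has to be running (and reachable at the configured SocketVMnetPath) before any qemu2 cluster can start.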

TestNetworkPlugins/group/kindnet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.793245584s)

-- stdout --
	* [kindnet-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-031000" primary control-plane node in "kindnet-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:13:23.695639   10592 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:13:23.695782   10592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:23.695786   10592 out.go:304] Setting ErrFile to fd 2...
	I0717 11:13:23.695788   10592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:23.695911   10592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:13:23.696998   10592 out.go:298] Setting JSON to false
	I0717 11:13:23.712967   10592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6171,"bootTime":1721233832,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:13:23.713061   10592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:13:23.719494   10592 out.go:177] * [kindnet-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:13:23.727439   10592 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:13:23.727489   10592 notify.go:220] Checking for updates...
	I0717 11:13:23.735392   10592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:13:23.738468   10592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:13:23.741460   10592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:13:23.742788   10592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:13:23.745404   10592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:13:23.748748   10592 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:13:23.748817   10592 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:13:23.748869   10592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:13:23.752255   10592 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:13:23.759382   10592 start.go:297] selected driver: qemu2
	I0717 11:13:23.759387   10592 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:13:23.759393   10592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:13:23.761681   10592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:13:23.764501   10592 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:13:23.767540   10592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:13:23.767572   10592 cni.go:84] Creating CNI manager for "kindnet"
	I0717 11:13:23.767582   10592 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 11:13:23.767605   10592 start.go:340] cluster config:
	{Name:kindnet-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:13:23.771139   10592 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:13:23.779389   10592 out.go:177] * Starting "kindnet-031000" primary control-plane node in "kindnet-031000" cluster
	I0717 11:13:23.783408   10592 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:13:23.783424   10592 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:13:23.783440   10592 cache.go:56] Caching tarball of preloaded images
	I0717 11:13:23.783500   10592 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:13:23.783507   10592 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:13:23.783569   10592 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/kindnet-031000/config.json ...
	I0717 11:13:23.783583   10592 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/kindnet-031000/config.json: {Name:mk97ab78736e9981033c053105efe11b1c4b0956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:13:23.783794   10592 start.go:360] acquireMachinesLock for kindnet-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:23.783832   10592 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "kindnet-031000"
	I0717 11:13:23.783842   10592 start.go:93] Provisioning new machine with config: &{Name:kindnet-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:23.783872   10592 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:23.792421   10592 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:23.808241   10592 start.go:159] libmachine.API.Create for "kindnet-031000" (driver="qemu2")
	I0717 11:13:23.808279   10592 client.go:168] LocalClient.Create starting
	I0717 11:13:23.808346   10592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:23.808376   10592 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:23.808387   10592 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:23.808426   10592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:23.808448   10592 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:23.808460   10592 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:23.808805   10592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:23.950466   10592 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:24.066282   10592 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:24.066293   10592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:24.066454   10592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2
	I0717 11:13:24.076067   10592 main.go:141] libmachine: STDOUT: 
	I0717 11:13:24.076097   10592 main.go:141] libmachine: STDERR: 
	I0717 11:13:24.076158   10592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2 +20000M
	I0717 11:13:24.084367   10592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:24.084383   10592 main.go:141] libmachine: STDERR: 
	I0717 11:13:24.084403   10592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2
	I0717 11:13:24.084408   10592 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:24.084420   10592 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:24.084444   10592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:12:00:4a:c4:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2
	I0717 11:13:24.086155   10592 main.go:141] libmachine: STDOUT: 
	I0717 11:13:24.086169   10592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:24.086189   10592 client.go:171] duration metric: took 277.908458ms to LocalClient.Create
	I0717 11:13:26.088358   10592 start.go:128] duration metric: took 2.304474334s to createHost
	I0717 11:13:26.088427   10592 start.go:83] releasing machines lock for "kindnet-031000", held for 2.304602125s
	W0717 11:13:26.088503   10592 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:26.095555   10592 out.go:177] * Deleting "kindnet-031000" in qemu2 ...
	W0717 11:13:26.118918   10592 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:26.118952   10592 start.go:729] Will try again in 5 seconds ...
	I0717 11:13:31.121277   10592 start.go:360] acquireMachinesLock for kindnet-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:31.121835   10592 start.go:364] duration metric: took 438.708µs to acquireMachinesLock for "kindnet-031000"
	I0717 11:13:31.121909   10592 start.go:93] Provisioning new machine with config: &{Name:kindnet-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kindnet-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:31.122251   10592 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:31.128002   10592 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:31.176295   10592 start.go:159] libmachine.API.Create for "kindnet-031000" (driver="qemu2")
	I0717 11:13:31.176342   10592 client.go:168] LocalClient.Create starting
	I0717 11:13:31.176466   10592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:31.176543   10592 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:31.176562   10592 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:31.176626   10592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:31.176671   10592 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:31.176686   10592 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:31.177193   10592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:31.328792   10592 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:31.396045   10592 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:31.396056   10592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:31.396242   10592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2
	I0717 11:13:31.405880   10592 main.go:141] libmachine: STDOUT: 
	I0717 11:13:31.405896   10592 main.go:141] libmachine: STDERR: 
	I0717 11:13:31.405961   10592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2 +20000M
	I0717 11:13:31.413885   10592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:31.413903   10592 main.go:141] libmachine: STDERR: 
	I0717 11:13:31.413915   10592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2
	I0717 11:13:31.413923   10592 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:31.413931   10592 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:31.413970   10592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:c6:14:af:65:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kindnet-031000/disk.qcow2
	I0717 11:13:31.415875   10592 main.go:141] libmachine: STDOUT: 
	I0717 11:13:31.415890   10592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:31.415902   10592 client.go:171] duration metric: took 239.558ms to LocalClient.Create
	I0717 11:13:33.417995   10592 start.go:128] duration metric: took 2.295714167s to createHost
	I0717 11:13:33.418038   10592 start.go:83] releasing machines lock for "kindnet-031000", held for 2.296191916s
	W0717 11:13:33.418223   10592 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:33.429710   10592 out.go:177] 
	W0717 11:13:33.434733   10592 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:13:33.434746   10592 out.go:239] * 
	* 
	W0717 11:13:33.436050   10592 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:13:33.449787   10592 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.79s)
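Both creation attempts above fail at the same step: `/opt/socket_vmnet/bin/socket_vmnet_client` cannot reach the Unix socket at `/var/run/socket_vmnet` (`Connection refused`), which typically means the socket_vmnet daemon is not running on the CI host. A minimal pre-flight check, as a sketch (the socket path is taken from the failing command line in the log; the `SOCK` variable name is illustrative):

```shell
# Check that the socket_vmnet Unix socket exists before starting minikube.
# Path copied from the qemu invocation in the log above; override via SOCK.
SOCK="${SOCK:-/var/run/socket_vmnet}"
if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  echo "socket missing or not a socket: $SOCK"
fi
```

If the socket is missing, starting the socket_vmnet daemon (e.g. via its launchd service on macOS) before the test run should avoid this class of `exit status 80` failures.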

TestNetworkPlugins/group/flannel/Start (9.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.868601458s)

-- stdout --
	* [flannel-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-031000" primary control-plane node in "flannel-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:13:35.706746   10711 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:13:35.706891   10711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:35.706895   10711 out.go:304] Setting ErrFile to fd 2...
	I0717 11:13:35.706897   10711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:35.707028   10711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:13:35.708138   10711 out.go:298] Setting JSON to false
	I0717 11:13:35.724302   10711 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6183,"bootTime":1721233832,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:13:35.724370   10711 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:13:35.730754   10711 out.go:177] * [flannel-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:13:35.738751   10711 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:13:35.738816   10711 notify.go:220] Checking for updates...
	I0717 11:13:35.745783   10711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:13:35.748718   10711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:13:35.751749   10711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:13:35.754744   10711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:13:35.757646   10711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:13:35.761094   10711 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:13:35.761166   10711 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:13:35.761218   10711 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:13:35.765742   10711 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:13:35.772749   10711 start.go:297] selected driver: qemu2
	I0717 11:13:35.772755   10711 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:13:35.772762   10711 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:13:35.775132   10711 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:13:35.778756   10711 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:13:35.781741   10711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:13:35.781771   10711 cni.go:84] Creating CNI manager for "flannel"
	I0717 11:13:35.781777   10711 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0717 11:13:35.781825   10711 start.go:340] cluster config:
	{Name:flannel-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:flannel-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:13:35.785852   10711 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:13:35.794742   10711 out.go:177] * Starting "flannel-031000" primary control-plane node in "flannel-031000" cluster
	I0717 11:13:35.798693   10711 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:13:35.798710   10711 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:13:35.798723   10711 cache.go:56] Caching tarball of preloaded images
	I0717 11:13:35.798795   10711 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:13:35.798800   10711 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:13:35.798858   10711 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/flannel-031000/config.json ...
	I0717 11:13:35.798871   10711 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/flannel-031000/config.json: {Name:mkdd5e8d94e3b5b000d052d0d5028f64b818c52d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:13:35.799131   10711 start.go:360] acquireMachinesLock for flannel-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:35.799162   10711 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "flannel-031000"
	I0717 11:13:35.799171   10711 start.go:93] Provisioning new machine with config: &{Name:flannel-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:flannel-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:35.799199   10711 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:35.807719   10711 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:35.824604   10711 start.go:159] libmachine.API.Create for "flannel-031000" (driver="qemu2")
	I0717 11:13:35.824631   10711 client.go:168] LocalClient.Create starting
	I0717 11:13:35.824692   10711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:35.824723   10711 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:35.824734   10711 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:35.824773   10711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:35.824795   10711 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:35.824809   10711 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:35.825137   10711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:35.967859   10711 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:36.147427   10711 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:36.147437   10711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:36.147619   10711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2
	I0717 11:13:36.156934   10711 main.go:141] libmachine: STDOUT: 
	I0717 11:13:36.156952   10711 main.go:141] libmachine: STDERR: 
	I0717 11:13:36.157001   10711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2 +20000M
	I0717 11:13:36.165105   10711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:36.165123   10711 main.go:141] libmachine: STDERR: 
	I0717 11:13:36.165136   10711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2
	I0717 11:13:36.165141   10711 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:36.165152   10711 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:36.165179   10711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:d7:45:05:3c:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2
	I0717 11:13:36.166915   10711 main.go:141] libmachine: STDOUT: 
	I0717 11:13:36.166928   10711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:36.166946   10711 client.go:171] duration metric: took 342.313167ms to LocalClient.Create
	I0717 11:13:38.169161   10711 start.go:128] duration metric: took 2.369945042s to createHost
	I0717 11:13:38.169259   10711 start.go:83] releasing machines lock for "flannel-031000", held for 2.370103167s
	W0717 11:13:38.169321   10711 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:38.179410   10711 out.go:177] * Deleting "flannel-031000" in qemu2 ...
	W0717 11:13:38.205032   10711 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:38.205060   10711 start.go:729] Will try again in 5 seconds ...
	I0717 11:13:43.207221   10711 start.go:360] acquireMachinesLock for flannel-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:43.207826   10711 start.go:364] duration metric: took 487.084µs to acquireMachinesLock for "flannel-031000"
	I0717 11:13:43.207918   10711 start.go:93] Provisioning new machine with config: &{Name:flannel-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:flannel-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:43.208268   10711 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:43.217941   10711 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:43.268972   10711 start.go:159] libmachine.API.Create for "flannel-031000" (driver="qemu2")
	I0717 11:13:43.269028   10711 client.go:168] LocalClient.Create starting
	I0717 11:13:43.269151   10711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:43.269229   10711 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:43.269247   10711 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:43.269321   10711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:43.269368   10711 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:43.269383   10711 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:43.269938   10711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:43.423758   10711 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:43.481366   10711 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:43.481371   10711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:43.481545   10711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2
	I0717 11:13:43.491870   10711 main.go:141] libmachine: STDOUT: 
	I0717 11:13:43.491888   10711 main.go:141] libmachine: STDERR: 
	I0717 11:13:43.491950   10711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2 +20000M
	I0717 11:13:43.500166   10711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:43.500180   10711 main.go:141] libmachine: STDERR: 
	I0717 11:13:43.500191   10711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2
	I0717 11:13:43.500195   10711 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:43.500205   10711 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:43.500233   10711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:74:4c:43:f1:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/flannel-031000/disk.qcow2
	I0717 11:13:43.501882   10711 main.go:141] libmachine: STDOUT: 
	I0717 11:13:43.501895   10711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:43.501908   10711 client.go:171] duration metric: took 232.875584ms to LocalClient.Create
	I0717 11:13:45.504107   10711 start.go:128] duration metric: took 2.295815375s to createHost
	I0717 11:13:45.504213   10711 start.go:83] releasing machines lock for "flannel-031000", held for 2.296360208s
	W0717 11:13:45.504616   10711 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:45.516366   10711 out.go:177] 
	W0717 11:13:45.521337   10711 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:13:45.521366   10711 out.go:239] * 
	* 
	W0717 11:13:45.524104   10711 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:13:45.534241   10711 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.87s)
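Every failure in this group has the same root cause visible in the log above: `socket_vmnet_client` gets "Connection refused" dialing `/var/run/socket_vmnet`, so QEMU never launches and each `start` exits with status 80. A minimal pre-flight check for that daemon socket, as a sketch (the path is taken from the failing invocations above; `check_socket` is a helper name introduced here, not part of minikube):

```shell
# check_socket PATH: prints "present" if PATH exists and is a unix
# domain socket (as /var/run/socket_vmnet should be while the
# socket_vmnet daemon is running), otherwise prints "missing".
check_socket() {
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# Path taken from the qemu/socket_vmnet_client command lines above.
check_socket /var/run/socket_vmnet
```

If the socket is missing or refusing connections, restarting the daemon before rerunning the suite (on Homebrew installs, typically `sudo brew services start socket_vmnet`) should clear every `GUEST_PROVISION` failure in this group.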

TestNetworkPlugins/group/enable-default-cni/Start (9.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.779661292s)

-- stdout --
	* [enable-default-cni-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-031000" primary control-plane node in "enable-default-cni-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:13:47.836978   10839 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:13:47.837093   10839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:47.837096   10839 out.go:304] Setting ErrFile to fd 2...
	I0717 11:13:47.837099   10839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:47.837228   10839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:13:47.838389   10839 out.go:298] Setting JSON to false
	I0717 11:13:47.855386   10839 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6195,"bootTime":1721233832,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:13:47.855455   10839 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:13:47.860481   10839 out.go:177] * [enable-default-cni-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:13:47.867500   10839 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:13:47.867579   10839 notify.go:220] Checking for updates...
	I0717 11:13:47.874551   10839 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:13:47.877463   10839 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:13:47.880535   10839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:13:47.883531   10839 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:13:47.884888   10839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:13:47.887724   10839 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:13:47.887788   10839 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:13:47.887842   10839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:13:47.892482   10839 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:13:47.897496   10839 start.go:297] selected driver: qemu2
	I0717 11:13:47.897502   10839 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:13:47.897508   10839 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:13:47.899680   10839 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:13:47.902463   10839 out.go:177] * Automatically selected the socket_vmnet network
	E0717 11:13:47.905551   10839 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0717 11:13:47.905564   10839 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:13:47.905576   10839 cni.go:84] Creating CNI manager for "bridge"
	I0717 11:13:47.905583   10839 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:13:47.905612   10839 start.go:340] cluster config:
	{Name:enable-default-cni-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:13:47.908999   10839 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:13:47.917506   10839 out.go:177] * Starting "enable-default-cni-031000" primary control-plane node in "enable-default-cni-031000" cluster
	I0717 11:13:47.921409   10839 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:13:47.921423   10839 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:13:47.921432   10839 cache.go:56] Caching tarball of preloaded images
	I0717 11:13:47.921487   10839 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:13:47.921493   10839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:13:47.921541   10839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/enable-default-cni-031000/config.json ...
	I0717 11:13:47.921552   10839 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/enable-default-cni-031000/config.json: {Name:mk786f2fae8984fa02a264b00f51637fffe873fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:13:47.921749   10839 start.go:360] acquireMachinesLock for enable-default-cni-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:47.921783   10839 start.go:364] duration metric: took 24.792µs to acquireMachinesLock for "enable-default-cni-031000"
	I0717 11:13:47.921793   10839 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:47.921820   10839 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:47.930434   10839 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:47.945549   10839 start.go:159] libmachine.API.Create for "enable-default-cni-031000" (driver="qemu2")
	I0717 11:13:47.945576   10839 client.go:168] LocalClient.Create starting
	I0717 11:13:47.945645   10839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:47.945679   10839 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:47.945686   10839 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:47.945722   10839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:47.945744   10839 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:47.945752   10839 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:47.946173   10839 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:48.087755   10839 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:48.211465   10839 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:48.211474   10839 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:48.211651   10839 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2
	I0717 11:13:48.220795   10839 main.go:141] libmachine: STDOUT: 
	I0717 11:13:48.220813   10839 main.go:141] libmachine: STDERR: 
	I0717 11:13:48.220858   10839 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2 +20000M
	I0717 11:13:48.228746   10839 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:48.228765   10839 main.go:141] libmachine: STDERR: 
	I0717 11:13:48.228777   10839 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2
	I0717 11:13:48.228782   10839 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:48.228793   10839 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:48.228832   10839 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:3d:bd:a3:e3:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2
	I0717 11:13:48.230557   10839 main.go:141] libmachine: STDOUT: 
	I0717 11:13:48.230576   10839 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:48.230595   10839 client.go:171] duration metric: took 285.017917ms to LocalClient.Create
	I0717 11:13:50.232701   10839 start.go:128] duration metric: took 2.310886584s to createHost
	I0717 11:13:50.232720   10839 start.go:83] releasing machines lock for "enable-default-cni-031000", held for 2.310948792s
	W0717 11:13:50.232735   10839 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:50.242688   10839 out.go:177] * Deleting "enable-default-cni-031000" in qemu2 ...
	W0717 11:13:50.251913   10839 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:50.251931   10839 start.go:729] Will try again in 5 seconds ...
	I0717 11:13:55.254162   10839 start.go:360] acquireMachinesLock for enable-default-cni-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:55.254784   10839 start.go:364] duration metric: took 402.5µs to acquireMachinesLock for "enable-default-cni-031000"
	I0717 11:13:55.254856   10839 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:55.255164   10839 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:55.260840   10839 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:55.308859   10839 start.go:159] libmachine.API.Create for "enable-default-cni-031000" (driver="qemu2")
	I0717 11:13:55.308921   10839 client.go:168] LocalClient.Create starting
	I0717 11:13:55.309038   10839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:55.309118   10839 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:55.309132   10839 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:55.309224   10839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:55.309277   10839 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:55.309287   10839 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:55.309803   10839 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:55.462194   10839 main.go:141] libmachine: Creating SSH key...
	I0717 11:13:55.534756   10839 main.go:141] libmachine: Creating Disk image...
	I0717 11:13:55.534762   10839 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:13:55.534919   10839 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2
	I0717 11:13:55.544081   10839 main.go:141] libmachine: STDOUT: 
	I0717 11:13:55.544100   10839 main.go:141] libmachine: STDERR: 
	I0717 11:13:55.544153   10839 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2 +20000M
	I0717 11:13:55.552086   10839 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:13:55.552108   10839 main.go:141] libmachine: STDERR: 
	I0717 11:13:55.552119   10839 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2
	I0717 11:13:55.552125   10839 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:13:55.552131   10839 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:13:55.552165   10839 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:6a:54:46:d7:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/enable-default-cni-031000/disk.qcow2
	I0717 11:13:55.553874   10839 main.go:141] libmachine: STDOUT: 
	I0717 11:13:55.553888   10839 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:13:55.553900   10839 client.go:171] duration metric: took 244.97475ms to LocalClient.Create
	I0717 11:13:57.556007   10839 start.go:128] duration metric: took 2.300836334s to createHost
	I0717 11:13:57.556054   10839 start.go:83] releasing machines lock for "enable-default-cni-031000", held for 2.301263333s
	W0717 11:13:57.556244   10839 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:13:57.567444   10839 out.go:177] 
	W0717 11:13:57.571624   10839 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:13:57.571641   10839 out.go:239] * 
	* 
	W0717 11:13:57.572920   10839 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:13:57.582570   10839 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.78s)

TestNetworkPlugins/group/bridge/Start (10.11s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.111196667s)

-- stdout --
	* [bridge-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-031000" primary control-plane node in "bridge-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:13:59.714806   10956 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:13:59.714974   10956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:59.714977   10956 out.go:304] Setting ErrFile to fd 2...
	I0717 11:13:59.714980   10956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:13:59.715109   10956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:13:59.716089   10956 out.go:298] Setting JSON to false
	I0717 11:13:59.732136   10956 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6207,"bootTime":1721233832,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:13:59.732198   10956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:13:59.738308   10956 out.go:177] * [bridge-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:13:59.746464   10956 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:13:59.746503   10956 notify.go:220] Checking for updates...
	I0717 11:13:59.753414   10956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:13:59.756493   10956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:13:59.759424   10956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:13:59.762403   10956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:13:59.765456   10956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:13:59.768695   10956 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:13:59.768764   10956 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:13:59.768816   10956 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:13:59.773392   10956 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:13:59.779424   10956 start.go:297] selected driver: qemu2
	I0717 11:13:59.779431   10956 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:13:59.779438   10956 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:13:59.781669   10956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:13:59.784398   10956 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:13:59.787528   10956 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:13:59.787545   10956 cni.go:84] Creating CNI manager for "bridge"
	I0717 11:13:59.787549   10956 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:13:59.787576   10956 start.go:340] cluster config:
	{Name:bridge-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:13:59.791070   10956 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:13:59.799403   10956 out.go:177] * Starting "bridge-031000" primary control-plane node in "bridge-031000" cluster
	I0717 11:13:59.803357   10956 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:13:59.803374   10956 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:13:59.803383   10956 cache.go:56] Caching tarball of preloaded images
	I0717 11:13:59.803447   10956 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:13:59.803453   10956 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:13:59.803502   10956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/bridge-031000/config.json ...
	I0717 11:13:59.803514   10956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/bridge-031000/config.json: {Name:mkcff5ae96d0a694b6ef747da7f5b1f34361c206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:13:59.803732   10956 start.go:360] acquireMachinesLock for bridge-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:13:59.803764   10956 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "bridge-031000"
	I0717 11:13:59.803774   10956 start.go:93] Provisioning new machine with config: &{Name:bridge-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:13:59.803805   10956 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:13:59.815344   10956 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:13:59.831951   10956 start.go:159] libmachine.API.Create for "bridge-031000" (driver="qemu2")
	I0717 11:13:59.831976   10956 client.go:168] LocalClient.Create starting
	I0717 11:13:59.832038   10956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:13:59.832069   10956 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:59.832078   10956 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:59.832112   10956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:13:59.832135   10956 main.go:141] libmachine: Decoding PEM data...
	I0717 11:13:59.832144   10956 main.go:141] libmachine: Parsing certificate...
	I0717 11:13:59.832506   10956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:13:59.975454   10956 main.go:141] libmachine: Creating SSH key...
	I0717 11:14:00.361551   10956 main.go:141] libmachine: Creating Disk image...
	I0717 11:14:00.361563   10956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:14:00.361787   10956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2
	I0717 11:14:00.371922   10956 main.go:141] libmachine: STDOUT: 
	I0717 11:14:00.371951   10956 main.go:141] libmachine: STDERR: 
	I0717 11:14:00.372013   10956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2 +20000M
	I0717 11:14:00.380113   10956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:14:00.380129   10956 main.go:141] libmachine: STDERR: 
	I0717 11:14:00.380141   10956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2
	I0717 11:14:00.380146   10956 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:14:00.380160   10956 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:00.380188   10956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:a9:d0:6e:0e:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2
	I0717 11:14:00.381898   10956 main.go:141] libmachine: STDOUT: 
	I0717 11:14:00.381914   10956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:00.381932   10956 client.go:171] duration metric: took 549.956291ms to LocalClient.Create
	I0717 11:14:02.384015   10956 start.go:128] duration metric: took 2.580215625s to createHost
	I0717 11:14:02.384062   10956 start.go:83] releasing machines lock for "bridge-031000", held for 2.580308833s
	W0717 11:14:02.384089   10956 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:02.398151   10956 out.go:177] * Deleting "bridge-031000" in qemu2 ...
	W0717 11:14:02.415802   10956 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:02.415816   10956 start.go:729] Will try again in 5 seconds ...
	I0717 11:14:07.418000   10956 start.go:360] acquireMachinesLock for bridge-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:07.418341   10956 start.go:364] duration metric: took 236.208µs to acquireMachinesLock for "bridge-031000"
	I0717 11:14:07.418449   10956 start.go:93] Provisioning new machine with config: &{Name:bridge-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:07.418638   10956 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:14:07.435101   10956 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:14:07.484801   10956 start.go:159] libmachine.API.Create for "bridge-031000" (driver="qemu2")
	I0717 11:14:07.484855   10956 client.go:168] LocalClient.Create starting
	I0717 11:14:07.484982   10956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:14:07.485053   10956 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:07.485069   10956 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:07.485136   10956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:14:07.485180   10956 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:07.485197   10956 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:07.485760   10956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:14:07.641507   10956 main.go:141] libmachine: Creating SSH key...
	I0717 11:14:07.736267   10956 main.go:141] libmachine: Creating Disk image...
	I0717 11:14:07.736275   10956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:14:07.736460   10956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2
	I0717 11:14:07.745618   10956 main.go:141] libmachine: STDOUT: 
	I0717 11:14:07.745639   10956 main.go:141] libmachine: STDERR: 
	I0717 11:14:07.745690   10956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2 +20000M
	I0717 11:14:07.753627   10956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:14:07.753652   10956 main.go:141] libmachine: STDERR: 
	I0717 11:14:07.753672   10956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2
	I0717 11:14:07.753678   10956 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:14:07.753685   10956 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:07.753722   10956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a3:f6:58:24:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/bridge-031000/disk.qcow2
	I0717 11:14:07.755520   10956 main.go:141] libmachine: STDOUT: 
	I0717 11:14:07.755535   10956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:07.755549   10956 client.go:171] duration metric: took 270.690083ms to LocalClient.Create
	I0717 11:14:09.757868   10956 start.go:128] duration metric: took 2.339173959s to createHost
	I0717 11:14:09.757990   10956 start.go:83] releasing machines lock for "bridge-031000", held for 2.339646917s
	W0717 11:14:09.758353   10956 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:09.769295   10956 out.go:177] 
	W0717 11:14:09.773567   10956 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:14:09.773620   10956 out.go:239] * 
	* 
	W0717 11:14:09.776020   10956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:14:09.784494   10956 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.11s)

TestNetworkPlugins/group/kubenet/Start (9.99s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-031000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.987953125s)

-- stdout --
	* [kubenet-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-031000" primary control-plane node in "kubenet-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:14:11.962907   11076 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:14:11.963056   11076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:11.963060   11076 out.go:304] Setting ErrFile to fd 2...
	I0717 11:14:11.963062   11076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:11.963194   11076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:14:11.964381   11076 out.go:298] Setting JSON to false
	I0717 11:14:11.981275   11076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6219,"bootTime":1721233832,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:14:11.981358   11076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:14:11.987186   11076 out.go:177] * [kubenet-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:14:11.995127   11076 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:14:11.995179   11076 notify.go:220] Checking for updates...
	I0717 11:14:12.002200   11076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:14:12.005121   11076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:14:12.008137   11076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:14:12.011190   11076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:14:12.014113   11076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:14:12.017445   11076 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:14:12.017515   11076 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:14:12.017566   11076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:14:12.022209   11076 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:14:12.029125   11076 start.go:297] selected driver: qemu2
	I0717 11:14:12.029133   11076 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:14:12.029141   11076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:14:12.031524   11076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:14:12.034160   11076 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:14:12.035570   11076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:14:12.035585   11076 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0717 11:14:12.035615   11076 start.go:340] cluster config:
	{Name:kubenet-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kubenet-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:14:12.039581   11076 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:12.048229   11076 out.go:177] * Starting "kubenet-031000" primary control-plane node in "kubenet-031000" cluster
	I0717 11:14:12.052075   11076 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:14:12.052098   11076 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:14:12.052109   11076 cache.go:56] Caching tarball of preloaded images
	I0717 11:14:12.052170   11076 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:14:12.052186   11076 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:14:12.052253   11076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/kubenet-031000/config.json ...
	I0717 11:14:12.052273   11076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/kubenet-031000/config.json: {Name:mke0618196f2ee0c81050f1a24ab532723d52ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:14:12.052479   11076 start.go:360] acquireMachinesLock for kubenet-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:12.052514   11076 start.go:364] duration metric: took 29.042µs to acquireMachinesLock for "kubenet-031000"
	I0717 11:14:12.052524   11076 start.go:93] Provisioning new machine with config: &{Name:kubenet-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kubenet-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:12.052550   11076 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:14:12.055170   11076 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:14:12.071120   11076 start.go:159] libmachine.API.Create for "kubenet-031000" (driver="qemu2")
	I0717 11:14:12.071144   11076 client.go:168] LocalClient.Create starting
	I0717 11:14:12.071203   11076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:14:12.071232   11076 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:12.071241   11076 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:12.071275   11076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:14:12.071299   11076 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:12.071307   11076 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:12.071664   11076 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:14:12.214447   11076 main.go:141] libmachine: Creating SSH key...
	I0717 11:14:12.545559   11076 main.go:141] libmachine: Creating Disk image...
	I0717 11:14:12.545574   11076 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:14:12.545760   11076 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2
	I0717 11:14:12.555251   11076 main.go:141] libmachine: STDOUT: 
	I0717 11:14:12.555280   11076 main.go:141] libmachine: STDERR: 
	I0717 11:14:12.555351   11076 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2 +20000M
	I0717 11:14:12.563673   11076 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:14:12.563686   11076 main.go:141] libmachine: STDERR: 
	I0717 11:14:12.563702   11076 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2
	I0717 11:14:12.563710   11076 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:14:12.563721   11076 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:12.563753   11076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:31:12:b3:3c:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2
	I0717 11:14:12.565419   11076 main.go:141] libmachine: STDOUT: 
	I0717 11:14:12.565432   11076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:12.565451   11076 client.go:171] duration metric: took 494.306917ms to LocalClient.Create
	I0717 11:14:14.567327   11076 start.go:128] duration metric: took 2.514783667s to createHost
	I0717 11:14:14.567371   11076 start.go:83] releasing machines lock for "kubenet-031000", held for 2.51486975s
	W0717 11:14:14.567390   11076 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:14.572196   11076 out.go:177] * Deleting "kubenet-031000" in qemu2 ...
	W0717 11:14:14.591669   11076 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:14.591694   11076 start.go:729] Will try again in 5 seconds ...
	I0717 11:14:19.592509   11076 start.go:360] acquireMachinesLock for kubenet-031000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:19.593138   11076 start.go:364] duration metric: took 518.125µs to acquireMachinesLock for "kubenet-031000"
	I0717 11:14:19.593303   11076 start.go:93] Provisioning new machine with config: &{Name:kubenet-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:kubenet-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:19.593571   11076 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:14:19.604411   11076 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 11:14:19.653336   11076 start.go:159] libmachine.API.Create for "kubenet-031000" (driver="qemu2")
	I0717 11:14:19.653401   11076 client.go:168] LocalClient.Create starting
	I0717 11:14:19.653525   11076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:14:19.653589   11076 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:19.653607   11076 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:19.653678   11076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:14:19.653724   11076 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:19.653740   11076 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:19.654459   11076 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:14:19.825364   11076 main.go:141] libmachine: Creating SSH key...
	I0717 11:14:19.866252   11076 main.go:141] libmachine: Creating Disk image...
	I0717 11:14:19.866259   11076 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:14:19.866428   11076 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2
	I0717 11:14:19.875537   11076 main.go:141] libmachine: STDOUT: 
	I0717 11:14:19.875558   11076 main.go:141] libmachine: STDERR: 
	I0717 11:14:19.875613   11076 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2 +20000M
	I0717 11:14:19.883654   11076 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:14:19.883668   11076 main.go:141] libmachine: STDERR: 
	I0717 11:14:19.883683   11076 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2
	I0717 11:14:19.883687   11076 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:14:19.883697   11076 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:19.883728   11076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:84:18:48:eb:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/kubenet-031000/disk.qcow2
	I0717 11:14:19.885460   11076 main.go:141] libmachine: STDOUT: 
	I0717 11:14:19.885472   11076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:19.885483   11076 client.go:171] duration metric: took 232.078292ms to LocalClient.Create
	I0717 11:14:21.887572   11076 start.go:128] duration metric: took 2.293979208s to createHost
	I0717 11:14:21.887586   11076 start.go:83] releasing machines lock for "kubenet-031000", held for 2.294440583s
	W0717 11:14:21.887684   11076 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:21.897884   11076 out.go:177] 
	W0717 11:14:21.901776   11076 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:14:21.901787   11076 out.go:239] * 
	* 
	W0717 11:14:21.902264   11076 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:14:21.912849   11076 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.99s)
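Both VM-creation attempts above die at the same step: QEMU is launched through `socket_vmnet_client`, which cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet`. A minimal preflight sketch of the check the host is failing (the socket path comes from the `SocketVMnetPath` value in the cluster config above; the helper name is hypothetical, not part of minikube):

```shell
# check_socket_vmnet: report whether the socket_vmnet daemon socket is
# present before minikube launches a qemu2 VM through it.
# `[ -S path ]` is true only for an existing unix-domain socket; note a
# socket can also exist with no listener, which likewise yields the
# "Connection refused" seen in the log.
check_socket_vmnet() {
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "absent"
  fi
}
```

On this CI host the check would presumably print `absent` (or the socket exists but the daemon is dead); restarting the socket_vmnet service, e.g. via launchd or Homebrew services depending on how it was installed, is the usual remedy.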

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-981000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-981000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.72388975s)

                                                
                                                
-- stdout --
	* [old-k8s-version-981000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-981000" primary control-plane node in "old-k8s-version-981000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-981000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 11:14:24.063672   11195 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:14:24.063818   11195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:24.063824   11195 out.go:304] Setting ErrFile to fd 2...
	I0717 11:14:24.063827   11195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:24.063975   11195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:14:24.065096   11195 out.go:298] Setting JSON to false
	I0717 11:14:24.081341   11195 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6232,"bootTime":1721233832,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:14:24.081441   11195 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:14:24.087118   11195 out.go:177] * [old-k8s-version-981000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:14:24.095079   11195 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:14:24.095150   11195 notify.go:220] Checking for updates...
	I0717 11:14:24.102028   11195 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:14:24.105038   11195 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:14:24.106436   11195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:14:24.109073   11195 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:14:24.111997   11195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:14:24.115531   11195 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:14:24.115604   11195 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:14:24.115656   11195 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:14:24.119958   11195 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:14:24.127129   11195 start.go:297] selected driver: qemu2
	I0717 11:14:24.127137   11195 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:14:24.127145   11195 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:14:24.129413   11195 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:14:24.132034   11195 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:14:24.135123   11195 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:14:24.135145   11195 cni.go:84] Creating CNI manager for ""
	I0717 11:14:24.135154   11195 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 11:14:24.135184   11195 start.go:340] cluster config:
	{Name:old-k8s-version-981000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:14:24.138894   11195 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:24.147029   11195 out.go:177] * Starting "old-k8s-version-981000" primary control-plane node in "old-k8s-version-981000" cluster
	I0717 11:14:24.151047   11195 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 11:14:24.151061   11195 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 11:14:24.151077   11195 cache.go:56] Caching tarball of preloaded images
	I0717 11:14:24.151139   11195 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:14:24.151144   11195 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 11:14:24.151248   11195 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/old-k8s-version-981000/config.json ...
	I0717 11:14:24.151263   11195 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/old-k8s-version-981000/config.json: {Name:mk1a02ead2d2f7d022d9973d3b72815353bb277c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:14:24.151481   11195 start.go:360] acquireMachinesLock for old-k8s-version-981000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:24.151514   11195 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "old-k8s-version-981000"
	I0717 11:14:24.151525   11195 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:24.151551   11195 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:14:24.159078   11195 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:14:24.177046   11195 start.go:159] libmachine.API.Create for "old-k8s-version-981000" (driver="qemu2")
	I0717 11:14:24.177079   11195 client.go:168] LocalClient.Create starting
	I0717 11:14:24.177183   11195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:14:24.177218   11195 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:24.177228   11195 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:24.177265   11195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:14:24.177287   11195 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:24.177293   11195 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:24.177690   11195 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:14:24.321129   11195 main.go:141] libmachine: Creating SSH key...
	I0717 11:14:24.355125   11195 main.go:141] libmachine: Creating Disk image...
	I0717 11:14:24.355130   11195 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:14:24.355274   11195 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2
	I0717 11:14:24.365042   11195 main.go:141] libmachine: STDOUT: 
	I0717 11:14:24.365059   11195 main.go:141] libmachine: STDERR: 
	I0717 11:14:24.365128   11195 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2 +20000M
	I0717 11:14:24.373219   11195 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:14:24.373232   11195 main.go:141] libmachine: STDERR: 
	I0717 11:14:24.373248   11195 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2
	I0717 11:14:24.373251   11195 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:14:24.373268   11195 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:24.373304   11195 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:4a:5e:c7:5d:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2
	I0717 11:14:24.375012   11195 main.go:141] libmachine: STDOUT: 
	I0717 11:14:24.375029   11195 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:24.375047   11195 client.go:171] duration metric: took 197.965666ms to LocalClient.Create
	I0717 11:14:26.377218   11195 start.go:128] duration metric: took 2.225661125s to createHost
	I0717 11:14:26.377288   11195 start.go:83] releasing machines lock for "old-k8s-version-981000", held for 2.225780208s
	W0717 11:14:26.377397   11195 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:26.387227   11195 out.go:177] * Deleting "old-k8s-version-981000" in qemu2 ...
	W0717 11:14:26.410583   11195 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:26.410618   11195 start.go:729] Will try again in 5 seconds ...
	I0717 11:14:31.412734   11195 start.go:360] acquireMachinesLock for old-k8s-version-981000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:31.413091   11195 start.go:364] duration metric: took 265.833µs to acquireMachinesLock for "old-k8s-version-981000"
	I0717 11:14:31.413182   11195 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:31.413398   11195 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:14:31.424715   11195 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:14:31.459469   11195 start.go:159] libmachine.API.Create for "old-k8s-version-981000" (driver="qemu2")
	I0717 11:14:31.459506   11195 client.go:168] LocalClient.Create starting
	I0717 11:14:31.459585   11195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:14:31.459631   11195 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:31.459644   11195 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:31.459679   11195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:14:31.459702   11195 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:31.459708   11195 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:31.460073   11195 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:14:31.601744   11195 main.go:141] libmachine: Creating SSH key...
	I0717 11:14:31.697306   11195 main.go:141] libmachine: Creating Disk image...
	I0717 11:14:31.697316   11195 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:14:31.697524   11195 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2
	I0717 11:14:31.706844   11195 main.go:141] libmachine: STDOUT: 
	I0717 11:14:31.706865   11195 main.go:141] libmachine: STDERR: 
	I0717 11:14:31.706917   11195 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2 +20000M
	I0717 11:14:31.714997   11195 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:14:31.715012   11195 main.go:141] libmachine: STDERR: 
	I0717 11:14:31.715025   11195 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2
	I0717 11:14:31.715031   11195 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:14:31.715042   11195 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:31.715073   11195 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b8:ce:a8:5f:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2
	I0717 11:14:31.716843   11195 main.go:141] libmachine: STDOUT: 
	I0717 11:14:31.716856   11195 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:31.716868   11195 client.go:171] duration metric: took 257.360208ms to LocalClient.Create
	I0717 11:14:33.719053   11195 start.go:128] duration metric: took 2.305636375s to createHost
	I0717 11:14:33.719126   11195 start.go:83] releasing machines lock for "old-k8s-version-981000", held for 2.30603s
	W0717 11:14:33.719525   11195 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:33.732186   11195 out.go:177] 
	W0717 11:14:33.735181   11195 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:14:33.735205   11195 out.go:239] * 
	* 
	W0717 11:14:33.737024   11195 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:14:33.747154   11195 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-981000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (64.843084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.79s)
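Every failure in this FirstStart block traces to one root cause repeated under several wrappers: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet`. When triaging a run like this, stripping the klog prefix and deduplicating the warning lines makes that obvious. A minimal sketch (the two sample lines are copied from the stderr above; the prefix regex is an assumption based on the standard klog header layout):

```python
import re

# Sample stderr lines copied verbatim from the log above.
log_lines = [
    'W0717 11:14:26.377397   11195 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1',
    'W0717 11:14:33.735181   11195 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1',
]

# Strip the klog header (level+date, time, pid, file:line]) so identical
# messages logged at different times compare equal.
KLOG_PREFIX = re.compile(r'^[IWEF]\d{4} [\d:.]+\s+\d+ \S+\] ')

unique = {KLOG_PREFIX.sub('', line) for line in log_lines}
for msg in sorted(unique):
    print(msg)
```

Both deduplicated messages bottom out in the same `Connection refused` on the socket_vmnet socket, which points at the daemon on the Jenkins agent rather than at the tests themselves.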

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-981000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-981000 create -f testdata/busybox.yaml: exit status 1 (32.176208ms)

** stderr ** 
	error: context "old-k8s-version-981000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-981000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (28.932875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (28.74675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-981000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-981000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-981000 describe deploy/metrics-server -n kube-system: exit status 1 (27.206333ms)

** stderr ** 
	error: context "old-k8s-version-981000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-981000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (28.816708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
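The `context "old-k8s-version-981000" does not exist` failures in DeployApp and EnableAddonWhileActive are cascades of the FirstStart failure, not independent bugs: the VM never came up, so no kubeconfig context was ever written. A hedged sketch of separating root causes from cascades when summarizing a run (the heuristic marker is an assumption; both message strings are taken from this report):

```python
# Messages copied from the failures above.
failures = [
    'Failed to connect to "/var/run/socket_vmnet": Connection refused',
    'error: context "old-k8s-version-981000" does not exist',
]

# Heuristic: a failure that only reports missing state (a context, a host)
# is likely downstream of an earlier provisioning failure.
CASCADE_MARKERS = ('does not exist',)

def is_cascade(msg: str) -> bool:
    """Return True if msg looks like a downstream (cascade) failure."""
    return any(marker in msg for marker in CASCADE_MARKERS)

for msg in failures:
    kind = 'cascade' if is_cascade(msg) else 'root cause'
    print(f'{kind}: {msg}')
```

Under this reading, only the socket_vmnet connection refusal needs investigation; the kubectl-level failures should clear once provisioning succeeds.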

TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-981000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-981000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.179151167s)

-- stdout --
	* [old-k8s-version-981000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-981000" primary control-plane node in "old-k8s-version-981000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-981000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-981000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:14:37.757184   11250 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:14:37.757333   11250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:37.757336   11250 out.go:304] Setting ErrFile to fd 2...
	I0717 11:14:37.757339   11250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:37.757475   11250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:14:37.758543   11250 out.go:298] Setting JSON to false
	I0717 11:14:37.775195   11250 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6245,"bootTime":1721233832,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:14:37.775278   11250 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:14:37.779975   11250 out.go:177] * [old-k8s-version-981000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:14:37.785993   11250 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:14:37.786060   11250 notify.go:220] Checking for updates...
	I0717 11:14:37.792888   11250 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:14:37.795896   11250 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:14:37.798952   11250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:14:37.801880   11250 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:14:37.804895   11250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:14:37.808186   11250 config.go:182] Loaded profile config "old-k8s-version-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0717 11:14:37.809881   11250 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 11:14:37.812892   11250 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:14:37.816890   11250 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:14:37.821906   11250 start.go:297] selected driver: qemu2
	I0717 11:14:37.821916   11250 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:14:37.821977   11250 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:14:37.824324   11250 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:14:37.824368   11250 cni.go:84] Creating CNI manager for ""
	I0717 11:14:37.824375   11250 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 11:14:37.824403   11250 start.go:340] cluster config:
	{Name:old-k8s-version-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-981000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:14:37.827945   11250 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:37.835854   11250 out.go:177] * Starting "old-k8s-version-981000" primary control-plane node in "old-k8s-version-981000" cluster
	I0717 11:14:37.838894   11250 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 11:14:37.838905   11250 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 11:14:37.838914   11250 cache.go:56] Caching tarball of preloaded images
	I0717 11:14:37.838963   11250 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:14:37.838969   11250 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 11:14:37.839014   11250 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/old-k8s-version-981000/config.json ...
	I0717 11:14:37.839463   11250 start.go:360] acquireMachinesLock for old-k8s-version-981000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:37.839496   11250 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "old-k8s-version-981000"
	I0717 11:14:37.839504   11250 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:14:37.839510   11250 fix.go:54] fixHost starting: 
	I0717 11:14:37.839625   11250 fix.go:112] recreateIfNeeded on old-k8s-version-981000: state=Stopped err=<nil>
	W0717 11:14:37.839632   11250 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:14:37.843855   11250 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-981000" ...
	I0717 11:14:37.851757   11250 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:37.851792   11250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b8:ce:a8:5f:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2
	I0717 11:14:37.853715   11250 main.go:141] libmachine: STDOUT: 
	I0717 11:14:37.853730   11250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:37.853755   11250 fix.go:56] duration metric: took 14.244792ms for fixHost
	I0717 11:14:37.853759   11250 start.go:83] releasing machines lock for "old-k8s-version-981000", held for 14.2585ms
	W0717 11:14:37.853766   11250 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:14:37.853798   11250 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:37.853803   11250 start.go:729] Will try again in 5 seconds ...
	I0717 11:14:42.855987   11250 start.go:360] acquireMachinesLock for old-k8s-version-981000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:42.856157   11250 start.go:364] duration metric: took 119.791µs to acquireMachinesLock for "old-k8s-version-981000"
	I0717 11:14:42.856226   11250 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:14:42.856237   11250 fix.go:54] fixHost starting: 
	I0717 11:14:42.856544   11250 fix.go:112] recreateIfNeeded on old-k8s-version-981000: state=Stopped err=<nil>
	W0717 11:14:42.856556   11250 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:14:42.864960   11250 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-981000" ...
	I0717 11:14:42.868738   11250 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:42.868990   11250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b8:ce:a8:5f:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/old-k8s-version-981000/disk.qcow2
	I0717 11:14:42.876218   11250 main.go:141] libmachine: STDOUT: 
	I0717 11:14:42.876265   11250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:42.876345   11250 fix.go:56] duration metric: took 20.106417ms for fixHost
	I0717 11:14:42.876359   11250 start.go:83] releasing machines lock for "old-k8s-version-981000", held for 20.187ms
	W0717 11:14:42.876540   11250 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-981000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-981000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:42.883768   11250 out.go:177] 
	W0717 11:14:42.887842   11250 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:14:42.887862   11250 out.go:239] * 
	* 
	W0717 11:14:42.889271   11250 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:14:42.896789   11250 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-981000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (52.912459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-981000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (31.239625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-981000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.690667ms)

** stderr ** 
	error: context "old-k8s-version-981000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (29.629667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-981000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (28.373041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-981000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-981000 --alsologtostderr -v=1: exit status 83 (41.689875ms)

-- stdout --
	* The control-plane node old-k8s-version-981000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-981000"

-- /stdout --
** stderr ** 
	I0717 11:14:43.145566   11278 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:14:43.146485   11278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:43.146489   11278 out.go:304] Setting ErrFile to fd 2...
	I0717 11:14:43.146491   11278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:43.146620   11278 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:14:43.146828   11278 out.go:298] Setting JSON to false
	I0717 11:14:43.146834   11278 mustload.go:65] Loading cluster: old-k8s-version-981000
	I0717 11:14:43.147028   11278 config.go:182] Loaded profile config "old-k8s-version-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0717 11:14:43.151851   11278 out.go:177] * The control-plane node old-k8s-version-981000 host is not running: state=Stopped
	I0717 11:14:43.155844   11278 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-981000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-981000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (28.870625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (29.054083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.965888084s)

-- stdout --
	* [no-preload-112000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-112000" primary control-plane node in "no-preload-112000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-112000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:14:43.453253   11295 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:14:43.453390   11295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:43.453393   11295 out.go:304] Setting ErrFile to fd 2...
	I0717 11:14:43.453396   11295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:43.453539   11295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:14:43.454730   11295 out.go:298] Setting JSON to false
	I0717 11:14:43.471186   11295 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6251,"bootTime":1721233832,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:14:43.471252   11295 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:14:43.476052   11295 out.go:177] * [no-preload-112000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:14:43.483071   11295 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:14:43.483155   11295 notify.go:220] Checking for updates...
	I0717 11:14:43.489125   11295 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:14:43.492034   11295 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:14:43.495079   11295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:14:43.498099   11295 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:14:43.501083   11295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:14:43.504384   11295 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:14:43.504470   11295 config.go:182] Loaded profile config "stopped-upgrade-018000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0717 11:14:43.504515   11295 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:14:43.508013   11295 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:14:43.514018   11295 start.go:297] selected driver: qemu2
	I0717 11:14:43.514029   11295 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:14:43.514038   11295 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:14:43.516298   11295 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:14:43.518983   11295 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:14:43.522156   11295 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:14:43.522171   11295 cni.go:84] Creating CNI manager for ""
	I0717 11:14:43.522176   11295 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:14:43.522180   11295 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:14:43.522207   11295 start.go:340] cluster config:
	{Name:no-preload-112000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-112000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:14:43.525598   11295 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.534018   11295 out.go:177] * Starting "no-preload-112000" primary control-plane node in "no-preload-112000" cluster
	I0717 11:14:43.538046   11295 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 11:14:43.538111   11295 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/no-preload-112000/config.json ...
	I0717 11:14:43.538128   11295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/no-preload-112000/config.json: {Name:mkc544f1239f2d8e79b2f731c4870253ea887477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:14:43.538133   11295 cache.go:107] acquiring lock: {Name:mk37ecf4b84c0a96dc795321c2d0379c9a5f9bf4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.538132   11295 cache.go:107] acquiring lock: {Name:mk8ab9d8b3d5d0483298afba412cefc4778e312f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.538195   11295 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 11:14:43.538205   11295 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.5µs
	I0717 11:14:43.538211   11295 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 11:14:43.538223   11295 cache.go:107] acquiring lock: {Name:mke471489e6d5ba45ccf3a2c36f115c3880838ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.538264   11295 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 11:14:43.538298   11295 cache.go:107] acquiring lock: {Name:mk6311f2d7c1a80de7aac5f5cdf1c2df0c9aafc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.538282   11295 cache.go:107] acquiring lock: {Name:mk88673340b0e049355f4e666b5951dcf7cb03dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.538300   11295 cache.go:107] acquiring lock: {Name:mk3257cc3beb0ffb2b7750d3d394fa8dd485dd0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.538349   11295 cache.go:107] acquiring lock: {Name:mkcdd6e49a5c341a219db9d7a73b28e0e07413f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.538406   11295 start.go:360] acquireMachinesLock for no-preload-112000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:43.538396   11295 cache.go:107] acquiring lock: {Name:mk5dbaf18fcf001b0f5f192be384c74c83ff75a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:43.538438   11295 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 11:14:43.538445   11295 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 11:14:43.538451   11295 start.go:364] duration metric: took 36.459µs to acquireMachinesLock for "no-preload-112000"
	I0717 11:14:43.538473   11295 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 11:14:43.538513   11295 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 11:14:43.538588   11295 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 11:14:43.538609   11295 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 11:14:43.538484   11295 start.go:93] Provisioning new machine with config: &{Name:no-preload-112000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-112000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:43.538649   11295 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:14:43.546929   11295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:14:43.551445   11295 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 11:14:43.551469   11295 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 11:14:43.551505   11295 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 11:14:43.551520   11295 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 11:14:43.553717   11295 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 11:14:43.553928   11295 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 11:14:43.554052   11295 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 11:14:43.563115   11295 start.go:159] libmachine.API.Create for "no-preload-112000" (driver="qemu2")
	I0717 11:14:43.563137   11295 client.go:168] LocalClient.Create starting
	I0717 11:14:43.563198   11295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:14:43.563229   11295 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:43.563240   11295 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:43.563275   11295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:14:43.563298   11295 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:43.563306   11295 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:43.563619   11295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:14:43.706635   11295 main.go:141] libmachine: Creating SSH key...
	I0717 11:14:43.948896   11295 main.go:141] libmachine: Creating Disk image...
	I0717 11:14:43.948910   11295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:14:43.949072   11295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2
	I0717 11:14:43.954232   11295 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 11:14:43.958451   11295 main.go:141] libmachine: STDOUT: 
	I0717 11:14:43.958461   11295 main.go:141] libmachine: STDERR: 
	I0717 11:14:43.958511   11295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2 +20000M
	I0717 11:14:43.966753   11295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:14:43.966776   11295 main.go:141] libmachine: STDERR: 
	I0717 11:14:43.966794   11295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2
	I0717 11:14:43.966799   11295 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:14:43.966812   11295 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:43.966844   11295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:60:ca:07:80:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2
	I0717 11:14:43.967895   11295 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0717 11:14:43.968625   11295 main.go:141] libmachine: STDOUT: 
	I0717 11:14:43.968635   11295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:43.968651   11295 client.go:171] duration metric: took 405.513833ms to LocalClient.Create
	I0717 11:14:43.984226   11295 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 11:14:43.996932   11295 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0717 11:14:44.029626   11295 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 11:14:44.031621   11295 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 11:14:44.069816   11295 cache.go:162] opening:  /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 11:14:44.115701   11295 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0717 11:14:44.115711   11295 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 577.491167ms
	I0717 11:14:44.115720   11295 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0717 11:14:45.968703   11295 start.go:128] duration metric: took 2.43005975s to createHost
	I0717 11:14:45.968720   11295 start.go:83] releasing machines lock for "no-preload-112000", held for 2.430278125s
	W0717 11:14:45.968740   11295 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:45.973311   11295 out.go:177] * Deleting "no-preload-112000" in qemu2 ...
	W0717 11:14:45.983641   11295 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:45.983653   11295 start.go:729] Will try again in 5 seconds ...
	I0717 11:14:47.619687   11295 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0717 11:14:47.619716   11295 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.081500125s
	I0717 11:14:47.619730   11295 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0717 11:14:47.638026   11295 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0717 11:14:47.638040   11295 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.099736083s
	I0717 11:14:47.638049   11295 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0717 11:14:47.754188   11295 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0717 11:14:47.754219   11295 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.216118459s
	I0717 11:14:47.754230   11295 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0717 11:14:47.839762   11295 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0717 11:14:47.839791   11295 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 4.301535875s
	I0717 11:14:47.839809   11295 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0717 11:14:48.078978   11295 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0717 11:14:48.079009   11295 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.540797875s
	I0717 11:14:48.079021   11295 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0717 11:14:50.983796   11295 start.go:360] acquireMachinesLock for no-preload-112000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:50.984243   11295 start.go:364] duration metric: took 375.708µs to acquireMachinesLock for "no-preload-112000"
	I0717 11:14:50.984359   11295 start.go:93] Provisioning new machine with config: &{Name:no-preload-112000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-112000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:14:50.984564   11295 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:14:50.993057   11295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:14:51.036745   11295 start.go:159] libmachine.API.Create for "no-preload-112000" (driver="qemu2")
	I0717 11:14:51.036795   11295 client.go:168] LocalClient.Create starting
	I0717 11:14:51.036922   11295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:14:51.036990   11295 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:51.037031   11295 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:51.037103   11295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:14:51.037143   11295 main.go:141] libmachine: Decoding PEM data...
	I0717 11:14:51.037160   11295 main.go:141] libmachine: Parsing certificate...
	I0717 11:14:51.037623   11295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:14:51.195489   11295 main.go:141] libmachine: Creating SSH key...
	I0717 11:14:51.334767   11295 main.go:141] libmachine: Creating Disk image...
	I0717 11:14:51.334775   11295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:14:51.334954   11295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2
	I0717 11:14:51.345176   11295 main.go:141] libmachine: STDOUT: 
	I0717 11:14:51.345202   11295 main.go:141] libmachine: STDERR: 
	I0717 11:14:51.345265   11295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2 +20000M
	I0717 11:14:51.353898   11295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:14:51.353914   11295 main.go:141] libmachine: STDERR: 
	I0717 11:14:51.353926   11295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2
	I0717 11:14:51.353931   11295 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:14:51.353943   11295 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:51.353978   11295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d7:f7:b7:30:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2
	I0717 11:14:51.355785   11295 main.go:141] libmachine: STDOUT: 
	I0717 11:14:51.355800   11295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:51.355813   11295 client.go:171] duration metric: took 319.015459ms to LocalClient.Create
	I0717 11:14:53.170614   11295 cache.go:157] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0717 11:14:53.170682   11295 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 9.632481917s
	I0717 11:14:53.170727   11295 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0717 11:14:53.170814   11295 cache.go:87] Successfully saved all images to host disk.
	I0717 11:14:53.357969   11295 start.go:128] duration metric: took 2.373395334s to createHost
	I0717 11:14:53.358021   11295 start.go:83] releasing machines lock for "no-preload-112000", held for 2.373776375s
	W0717 11:14:53.358279   11295 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-112000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-112000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:53.366768   11295 out.go:177] 
	W0717 11:14:53.371833   11295 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:14:53.371848   11295 out.go:239] * 
	* 
	W0717 11:14:53.373113   11295 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:14:53.382726   11295 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (39.434209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.01s)
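Every failure in this group traces back to the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver could not reach the socket_vmnet daemon's unix socket on the build host. A minimal pre-flight sketch for this condition (the socket path is taken from the log above; how the daemon is launched on a given agent is an assumption and not shown here):

```shell
# Check that the socket_vmnet unix socket exists before starting the qemu2 driver.
# The path matches SocketVMnetPath in the cluster config logged above.
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
  echo "socket present: $SOCKET"
else
  echo "socket missing: $SOCKET"
  echo "start the socket_vmnet daemon on this host before 'minikube start --driver=qemu2'"
fi
```

If the socket is absent, every `minikube start` with `Network:socket_vmnet` will fail at VM creation exactly as in the transcripts above.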

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-112000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-112000 create -f testdata/busybox.yaml: exit status 1 (28.09025ms)

** stderr ** 
	error: context "no-preload-112000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-112000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (29.427625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (29.183292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-112000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-112000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-112000 describe deploy/metrics-server -n kube-system: exit status 1 (27.129667ms)

** stderr ** 
	error: context "no-preload-112000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-112000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (28.579167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.180206541s)

-- stdout --
	* [no-preload-112000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-112000" primary control-plane node in "no-preload-112000" cluster
	* Restarting existing qemu2 VM for "no-preload-112000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-112000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:14:57.011934   11388 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:14:57.012052   11388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:57.012055   11388 out.go:304] Setting ErrFile to fd 2...
	I0717 11:14:57.012058   11388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:14:57.012204   11388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:14:57.013210   11388 out.go:298] Setting JSON to false
	I0717 11:14:57.029263   11388 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6265,"bootTime":1721233832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:14:57.029369   11388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:14:57.034065   11388 out.go:177] * [no-preload-112000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:14:57.041070   11388 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:14:57.041120   11388 notify.go:220] Checking for updates...
	I0717 11:14:57.045986   11388 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:14:57.049070   11388 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:14:57.052028   11388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:14:57.054973   11388 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:14:57.061989   11388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:14:57.065180   11388 config.go:182] Loaded profile config "no-preload-112000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0717 11:14:57.065438   11388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:14:57.069987   11388 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:14:57.076924   11388 start.go:297] selected driver: qemu2
	I0717 11:14:57.076929   11388 start.go:901] validating driver "qemu2" against &{Name:no-preload-112000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-112000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:14:57.077003   11388 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:14:57.079224   11388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:14:57.079248   11388 cni.go:84] Creating CNI manager for ""
	I0717 11:14:57.079255   11388 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:14:57.079282   11388 start.go:340] cluster config:
	{Name:no-preload-112000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-112000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:14:57.082625   11388 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.089904   11388 out.go:177] * Starting "no-preload-112000" primary control-plane node in "no-preload-112000" cluster
	I0717 11:14:57.093979   11388 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 11:14:57.094055   11388 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/no-preload-112000/config.json ...
	I0717 11:14:57.094101   11388 cache.go:107] acquiring lock: {Name:mk6311f2d7c1a80de7aac5f5cdf1c2df0c9aafc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.094102   11388 cache.go:107] acquiring lock: {Name:mk37ecf4b84c0a96dc795321c2d0379c9a5f9bf4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.094149   11388 cache.go:107] acquiring lock: {Name:mk8ab9d8b3d5d0483298afba412cefc4778e312f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.094167   11388 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 11:14:57.094172   11388 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 71.042µs
	I0717 11:14:57.094175   11388 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0717 11:14:57.094183   11388 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 11:14:57.094182   11388 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 83.416µs
	I0717 11:14:57.094187   11388 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0717 11:14:57.094188   11388 cache.go:107] acquiring lock: {Name:mk88673340b0e049355f4e666b5951dcf7cb03dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.094194   11388 cache.go:107] acquiring lock: {Name:mkcdd6e49a5c341a219db9d7a73b28e0e07413f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.094208   11388 cache.go:107] acquiring lock: {Name:mk5dbaf18fcf001b0f5f192be384c74c83ff75a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.094222   11388 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0717 11:14:57.094226   11388 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 38.041µs
	I0717 11:14:57.094229   11388 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0717 11:14:57.094236   11388 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0717 11:14:57.094240   11388 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 112.333µs
	I0717 11:14:57.094243   11388 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0717 11:14:57.094243   11388 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0717 11:14:57.094244   11388 cache.go:107] acquiring lock: {Name:mk3257cc3beb0ffb2b7750d3d394fa8dd485dd0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.094248   11388 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 41.125µs
	I0717 11:14:57.094252   11388 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0717 11:14:57.094273   11388 cache.go:107] acquiring lock: {Name:mke471489e6d5ba45ccf3a2c36f115c3880838ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:14:57.094288   11388 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0717 11:14:57.094295   11388 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 82.75µs
	I0717 11:14:57.094302   11388 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0717 11:14:57.094321   11388 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0717 11:14:57.094325   11388 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 97.208µs
	I0717 11:14:57.094331   11388 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0717 11:14:57.094360   11388 cache.go:115] /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0717 11:14:57.094367   11388 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 172.792µs
	I0717 11:14:57.094371   11388 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0717 11:14:57.094375   11388 cache.go:87] Successfully saved all images to host disk.
	I0717 11:14:57.094462   11388 start.go:360] acquireMachinesLock for no-preload-112000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:14:57.094500   11388 start.go:364] duration metric: took 32.834µs to acquireMachinesLock for "no-preload-112000"
	I0717 11:14:57.094508   11388 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:14:57.094512   11388 fix.go:54] fixHost starting: 
	I0717 11:14:57.094616   11388 fix.go:112] recreateIfNeeded on no-preload-112000: state=Stopped err=<nil>
	W0717 11:14:57.094624   11388 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:14:57.102986   11388 out.go:177] * Restarting existing qemu2 VM for "no-preload-112000" ...
	I0717 11:14:57.107017   11388 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:14:57.107053   11388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d7:f7:b7:30:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2
	I0717 11:14:57.108914   11388 main.go:141] libmachine: STDOUT: 
	I0717 11:14:57.108931   11388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:14:57.108957   11388 fix.go:56] duration metric: took 14.444458ms for fixHost
	I0717 11:14:57.108960   11388 start.go:83] releasing machines lock for "no-preload-112000", held for 14.457041ms
	W0717 11:14:57.108967   11388 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:14:57.108991   11388 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:14:57.108995   11388 start.go:729] Will try again in 5 seconds ...
	I0717 11:15:02.111242   11388 start.go:360] acquireMachinesLock for no-preload-112000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:02.111755   11388 start.go:364] duration metric: took 433.042µs to acquireMachinesLock for "no-preload-112000"
	I0717 11:15:02.111906   11388 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:15:02.111927   11388 fix.go:54] fixHost starting: 
	I0717 11:15:02.112691   11388 fix.go:112] recreateIfNeeded on no-preload-112000: state=Stopped err=<nil>
	W0717 11:15:02.112720   11388 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:15:02.117175   11388 out.go:177] * Restarting existing qemu2 VM for "no-preload-112000" ...
	I0717 11:15:02.120253   11388 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:02.120472   11388 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d7:f7:b7:30:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/no-preload-112000/disk.qcow2
	I0717 11:15:02.130541   11388 main.go:141] libmachine: STDOUT: 
	I0717 11:15:02.130603   11388 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:02.130680   11388 fix.go:56] duration metric: took 18.756ms for fixHost
	I0717 11:15:02.130696   11388 start.go:83] releasing machines lock for "no-preload-112000", held for 18.918541ms
	W0717 11:15:02.130861   11388 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-112000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:02.139062   11388 out.go:177] 
	W0717 11:15:02.142242   11388 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:02.142276   11388 out.go:239] * 
	W0717 11:15:02.143963   11388 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:15:02.156992   11388 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-112000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
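Triage note (not part of the original log): every start failure above reduces to dialing the `/var/run/socket_vmnet` unix socket and getting `Connection refused`, meaning the socket_vmnet daemon was not accepting connections on the build agent. A minimal Go sketch (the helper name `classifyDial` is hypothetical, not minikube code) shows how to distinguish the two common causes of such a dial failure:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
)

// classifyDial attempts to connect to a unix-domain socket and reports
// why the connection failed, mirroring the libmachine error seen above.
func classifyDial(path string) string {
	conn, err := net.Dial("unix", path)
	if err == nil {
		conn.Close()
		return "ok: a daemon accepted the connection"
	}
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		// The socket file exists, but no process is listening on it.
		return "refused: socket file exists but no daemon is accepting"
	case errors.Is(err, syscall.ENOENT):
		// The socket file itself is missing.
		return "missing: socket file does not exist"
	default:
		return fmt.Sprintf("other: %v", err)
	}
}

func main() {
	// A path that certainly does not exist yields ENOENT, not ECONNREFUSED.
	fmt.Println(classifyDial("/tmp/definitely-missing-socket-xyz.sock"))
}
```

Because the log reports ECONNREFUSED rather than a missing-file error, the socket path was present but the socket_vmnet daemon was likely down or wedged on the agent, which is consistent with the same error repeating across many tests in this run.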
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (55.834459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.24s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-112000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (29.9525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-112000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-112000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-112000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.62525ms)

** stderr ** 
	error: context "no-preload-112000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-112000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (27.704334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-112000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (29.63025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-112000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-112000 --alsologtostderr -v=1: exit status 83 (41.372833ms)

-- stdout --
	* The control-plane node no-preload-112000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-112000"

-- /stdout --
** stderr ** 
	I0717 11:15:02.405348   11409 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:02.405495   11409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:02.405499   11409 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:02.405501   11409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:02.405640   11409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:02.405887   11409 out.go:298] Setting JSON to false
	I0717 11:15:02.405894   11409 mustload.go:65] Loading cluster: no-preload-112000
	I0717 11:15:02.406065   11409 config.go:182] Loaded profile config "no-preload-112000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0717 11:15:02.410947   11409 out.go:177] * The control-plane node no-preload-112000 host is not running: state=Stopped
	I0717 11:15:02.414943   11409 out.go:177]   To start a cluster, run: "minikube start -p no-preload-112000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-112000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (29.71575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (32.021959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-112000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-016000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-016000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (9.976800125s)

-- stdout --
	* [embed-certs-016000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-016000" primary control-plane node in "embed-certs-016000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-016000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:15:02.592361   11421 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:02.592503   11421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:02.592506   11421 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:02.592509   11421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:02.592657   11421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:02.593870   11421 out.go:298] Setting JSON to false
	I0717 11:15:02.611194   11421 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6270,"bootTime":1721233832,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:15:02.611302   11421 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:15:02.616997   11421 out.go:177] * [embed-certs-016000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:15:02.621980   11421 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:15:02.622009   11421 notify.go:220] Checking for updates...
	I0717 11:15:02.641889   11421 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:15:02.645881   11421 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:15:02.652916   11421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:15:02.659778   11421 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:15:02.667824   11421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:15:02.672250   11421 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:02.672305   11421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:15:02.678856   11421 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:15:02.685907   11421 start.go:297] selected driver: qemu2
	I0717 11:15:02.685920   11421 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:15:02.685928   11421 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:15:02.688184   11421 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:15:02.691903   11421 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:15:02.694980   11421 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:15:02.695019   11421 cni.go:84] Creating CNI manager for ""
	I0717 11:15:02.695026   11421 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:15:02.695034   11421 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:15:02.695058   11421 start.go:340] cluster config:
	{Name:embed-certs-016000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-016000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:02.698422   11421 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:15:02.706916   11421 out.go:177] * Starting "embed-certs-016000" primary control-plane node in "embed-certs-016000" cluster
	I0717 11:15:02.710867   11421 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:15:02.710913   11421 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:15:02.710924   11421 cache.go:56] Caching tarball of preloaded images
	I0717 11:15:02.711013   11421 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:15:02.711020   11421 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:15:02.711092   11421 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/embed-certs-016000/config.json ...
	I0717 11:15:02.711106   11421 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/embed-certs-016000/config.json: {Name:mk37b8f72116e0fa0f60b09d17e485e484ccfb63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:15:02.711313   11421 start.go:360] acquireMachinesLock for embed-certs-016000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:02.711351   11421 start.go:364] duration metric: took 32.25µs to acquireMachinesLock for "embed-certs-016000"
	I0717 11:15:02.711363   11421 start.go:93] Provisioning new machine with config: &{Name:embed-certs-016000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:embed-certs-016000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:15:02.711401   11421 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:15:02.718844   11421 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:15:02.735003   11421 start.go:159] libmachine.API.Create for "embed-certs-016000" (driver="qemu2")
	I0717 11:15:02.735037   11421 client.go:168] LocalClient.Create starting
	I0717 11:15:02.735128   11421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:15:02.735170   11421 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:02.735179   11421 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:02.735221   11421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:15:02.735244   11421 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:02.735252   11421 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:02.735621   11421 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:15:02.936004   11421 main.go:141] libmachine: Creating SSH key...
	I0717 11:15:03.014657   11421 main.go:141] libmachine: Creating Disk image...
	I0717 11:15:03.014667   11421 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:15:03.014818   11421 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2
	I0717 11:15:03.024109   11421 main.go:141] libmachine: STDOUT: 
	I0717 11:15:03.024128   11421 main.go:141] libmachine: STDERR: 
	I0717 11:15:03.024185   11421 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2 +20000M
	I0717 11:15:03.040304   11421 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:15:03.040320   11421 main.go:141] libmachine: STDERR: 
	I0717 11:15:03.040337   11421 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2
	I0717 11:15:03.040340   11421 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:15:03.040351   11421 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:03.040397   11421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:14:46:8a:62:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2
	I0717 11:15:03.042096   11421 main.go:141] libmachine: STDOUT: 
	I0717 11:15:03.042119   11421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:03.042136   11421 client.go:171] duration metric: took 307.098541ms to LocalClient.Create
	I0717 11:15:05.044296   11421 start.go:128] duration metric: took 2.332885917s to createHost
	I0717 11:15:05.044346   11421 start.go:83] releasing machines lock for "embed-certs-016000", held for 2.333001667s
	W0717 11:15:05.044411   11421 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:05.062514   11421 out.go:177] * Deleting "embed-certs-016000" in qemu2 ...
	W0717 11:15:05.080666   11421 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:05.080704   11421 start.go:729] Will try again in 5 seconds ...
	I0717 11:15:10.082975   11421 start.go:360] acquireMachinesLock for embed-certs-016000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:10.083418   11421 start.go:364] duration metric: took 342.375µs to acquireMachinesLock for "embed-certs-016000"
	I0717 11:15:10.083575   11421 start.go:93] Provisioning new machine with config: &{Name:embed-certs-016000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.2 ClusterName:embed-certs-016000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:15:10.083881   11421 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:15:10.093147   11421 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:15:10.142223   11421 start.go:159] libmachine.API.Create for "embed-certs-016000" (driver="qemu2")
	I0717 11:15:10.142268   11421 client.go:168] LocalClient.Create starting
	I0717 11:15:10.142370   11421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:15:10.142431   11421 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:10.142449   11421 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:10.142557   11421 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:15:10.142601   11421 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:10.142617   11421 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:10.143184   11421 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:15:10.288692   11421 main.go:141] libmachine: Creating SSH key...
	I0717 11:15:10.475676   11421 main.go:141] libmachine: Creating Disk image...
	I0717 11:15:10.475682   11421 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:15:10.475859   11421 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2
	I0717 11:15:10.485391   11421 main.go:141] libmachine: STDOUT: 
	I0717 11:15:10.485413   11421 main.go:141] libmachine: STDERR: 
	I0717 11:15:10.485481   11421 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2 +20000M
	I0717 11:15:10.493620   11421 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:15:10.493635   11421 main.go:141] libmachine: STDERR: 
	I0717 11:15:10.493647   11421 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2
	I0717 11:15:10.493656   11421 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:15:10.493668   11421 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:10.493694   11421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:13:1c:9c:05:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2
	I0717 11:15:10.495344   11421 main.go:141] libmachine: STDOUT: 
	I0717 11:15:10.495360   11421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:10.495373   11421 client.go:171] duration metric: took 353.102792ms to LocalClient.Create
	I0717 11:15:12.497532   11421 start.go:128] duration metric: took 2.413640542s to createHost
	I0717 11:15:12.497586   11421 start.go:83] releasing machines lock for "embed-certs-016000", held for 2.414157084s
	W0717 11:15:12.497940   11421 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-016000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-016000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:12.514467   11421 out.go:177] 
	W0717 11:15:12.520620   11421 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:12.520651   11421 out.go:239] * 
	* 
	W0717 11:15:12.522796   11421 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:15:12.530420   11421 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-016000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (49.164041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.03s)
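Note: every failure in this block traces to the same stderr line, where socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the VM never starts. A minimal triage sketch for scanning a saved log for that signature (a hypothetical helper, not part of the minikube tooling; the Homebrew restart hint assumes socket_vmnet was installed via brew):

```shell
#!/bin/sh
# scan_vmnet_log: grep a saved minikube log for the socket_vmnet
# "Connection refused" signature that recurs throughout this report.
# Hypothetical triage helper, not part of the minikube test suite.
scan_vmnet_log() {
  if grep -q 'Failed to connect to "/var/run/socket_vmnet": Connection refused' "$1"; then
    # Remediation hint assumes a Homebrew install of socket_vmnet;
    # adjust for other install methods.
    echo 'socket_vmnet unreachable: try "sudo brew services restart socket_vmnet" and re-run'
    return 1
  fi
  echo 'no socket_vmnet failures found'
}
```

Running this over the `--alsologtostderr` output above would flag the failure before the full 10s retry loop is inspected by hand.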

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-636000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-636000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (12.068281833s)

-- stdout --
	* [default-k8s-diff-port-636000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-636000" primary control-plane node in "default-k8s-diff-port-636000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-636000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:15:02.947237   11443 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:02.947365   11443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:02.947369   11443 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:02.947371   11443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:02.947497   11443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:02.948643   11443 out.go:298] Setting JSON to false
	I0717 11:15:02.965041   11443 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6270,"bootTime":1721233832,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:15:02.965141   11443 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:15:02.971049   11443 out.go:177] * [default-k8s-diff-port-636000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:15:02.979914   11443 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:15:02.979986   11443 notify.go:220] Checking for updates...
	I0717 11:15:02.986830   11443 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:15:02.989917   11443 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:15:02.992920   11443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:15:02.995937   11443 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:15:02.998884   11443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:15:03.002204   11443 config.go:182] Loaded profile config "embed-certs-016000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:03.002263   11443 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:03.002315   11443 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:15:03.005872   11443 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:15:03.012913   11443 start.go:297] selected driver: qemu2
	I0717 11:15:03.012919   11443 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:15:03.012924   11443 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:15:03.014929   11443 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 11:15:03.018893   11443 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:15:03.021965   11443 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:15:03.021982   11443 cni.go:84] Creating CNI manager for ""
	I0717 11:15:03.021989   11443 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:15:03.021992   11443 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:15:03.022016   11443 start.go:340] cluster config:
	{Name:default-k8s-diff-port-636000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:03.025472   11443 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:15:03.029924   11443 out.go:177] * Starting "default-k8s-diff-port-636000" primary control-plane node in "default-k8s-diff-port-636000" cluster
	I0717 11:15:03.039926   11443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:15:03.039952   11443 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:15:03.039963   11443 cache.go:56] Caching tarball of preloaded images
	I0717 11:15:03.040043   11443 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:15:03.040051   11443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:15:03.040113   11443 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/default-k8s-diff-port-636000/config.json ...
	I0717 11:15:03.040129   11443 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/default-k8s-diff-port-636000/config.json: {Name:mk9dcf48762c57d2e2b90ae9b55c44f1467f7f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:15:03.040299   11443 start.go:360] acquireMachinesLock for default-k8s-diff-port-636000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:05.044508   11443 start.go:364] duration metric: took 2.004200541s to acquireMachinesLock for "default-k8s-diff-port-636000"
	I0717 11:15:05.044598   11443 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:15:05.044865   11443 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:15:05.054480   11443 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:15:05.104792   11443 start.go:159] libmachine.API.Create for "default-k8s-diff-port-636000" (driver="qemu2")
	I0717 11:15:05.104844   11443 client.go:168] LocalClient.Create starting
	I0717 11:15:05.104977   11443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:15:05.105036   11443 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:05.105053   11443 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:05.105126   11443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:15:05.105175   11443 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:05.105189   11443 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:05.105917   11443 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:15:05.274556   11443 main.go:141] libmachine: Creating SSH key...
	I0717 11:15:05.347253   11443 main.go:141] libmachine: Creating Disk image...
	I0717 11:15:05.347261   11443 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:15:05.347433   11443 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2
	I0717 11:15:05.356587   11443 main.go:141] libmachine: STDOUT: 
	I0717 11:15:05.356602   11443 main.go:141] libmachine: STDERR: 
	I0717 11:15:05.356648   11443 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2 +20000M
	I0717 11:15:05.364434   11443 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:15:05.364447   11443 main.go:141] libmachine: STDERR: 
	I0717 11:15:05.364466   11443 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2
	I0717 11:15:05.364474   11443 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:15:05.364497   11443 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:05.364519   11443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:7c:a6:2d:f1:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2
	I0717 11:15:05.366122   11443 main.go:141] libmachine: STDOUT: 
	I0717 11:15:05.366135   11443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:05.366151   11443 client.go:171] duration metric: took 261.304583ms to LocalClient.Create
	I0717 11:15:07.368333   11443 start.go:128] duration metric: took 2.323453s to createHost
	I0717 11:15:07.368420   11443 start.go:83] releasing machines lock for "default-k8s-diff-port-636000", held for 2.323886625s
	W0717 11:15:07.368465   11443 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:07.376705   11443 out.go:177] * Deleting "default-k8s-diff-port-636000" in qemu2 ...
	W0717 11:15:07.407081   11443 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:07.407112   11443 start.go:729] Will try again in 5 seconds ...
	I0717 11:15:12.409287   11443 start.go:360] acquireMachinesLock for default-k8s-diff-port-636000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:12.497698   11443 start.go:364] duration metric: took 88.185208ms to acquireMachinesLock for "default-k8s-diff-port-636000"
	I0717 11:15:12.497847   11443 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:15:12.498096   11443 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:15:12.510299   11443 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:15:12.558650   11443 start.go:159] libmachine.API.Create for "default-k8s-diff-port-636000" (driver="qemu2")
	I0717 11:15:12.558709   11443 client.go:168] LocalClient.Create starting
	I0717 11:15:12.558805   11443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:15:12.558848   11443 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:12.558865   11443 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:12.558923   11443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:15:12.558953   11443 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:12.558963   11443 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:12.559461   11443 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:15:12.712571   11443 main.go:141] libmachine: Creating SSH key...
	I0717 11:15:12.932966   11443 main.go:141] libmachine: Creating Disk image...
	I0717 11:15:12.932974   11443 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:15:12.933163   11443 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2
	I0717 11:15:12.942296   11443 main.go:141] libmachine: STDOUT: 
	I0717 11:15:12.942318   11443 main.go:141] libmachine: STDERR: 
	I0717 11:15:12.942371   11443 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2 +20000M
	I0717 11:15:12.953698   11443 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:15:12.953709   11443 main.go:141] libmachine: STDERR: 
	I0717 11:15:12.953725   11443 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2
	I0717 11:15:12.953729   11443 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:15:12.953738   11443 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:12.953763   11443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:2b:8f:df:60:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2
	I0717 11:15:12.955353   11443 main.go:141] libmachine: STDOUT: 
	I0717 11:15:12.955368   11443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:12.955388   11443 client.go:171] duration metric: took 396.676917ms to LocalClient.Create
	I0717 11:15:14.955451   11443 start.go:128] duration metric: took 2.457324167s to createHost
	I0717 11:15:14.955474   11443 start.go:83] releasing machines lock for "default-k8s-diff-port-636000", held for 2.45777225s
	W0717 11:15:14.955528   11443 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:14.963637   11443 out.go:177] 
	W0717 11:15:14.967697   11443 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:14.967705   11443 out.go:239] * 
	* 
	W0717 11:15:14.968139   11443 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:15:14.977692   11443 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-636000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (34.178375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.10s)
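Every failure in this block (and in the rest of this run) reduces to the same root cause visible throughout the log: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon on the build agent is not listening. As a minimal sketch for triaging before re-queuing the job (the socket path comes from the `SocketVMnetPath` field in the machine config above; the `brew services` hint is an assumption based on the Homebrew paths in the log, not something the report states):

```shell
# The recurring error in this run is:
#   Failed to connect to "/var/run/socket_vmnet": Connection refused
# check_socket prints "listening" when its argument is a live UNIX-domain
# socket file, and "missing" otherwise (the file is absent or stale when
# the socket_vmnet daemon is not running).
check_socket() {
  if [ -S "$1" ]; then
    echo "listening"
  else
    echo "missing"
  fi
}

# Path taken from SocketVMnetPath in the machine config logged above.
check_socket /var/run/socket_vmnet
```

If this prints `missing` on the agent, restarting the daemon (for a Homebrew install, something like `sudo brew services restart socket_vmnet`) would be the first thing to try before rerunning the suite.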

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-016000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-016000 create -f testdata/busybox.yaml: exit status 1 (31.073167ms)

** stderr ** 
	error: context "embed-certs-016000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-016000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (33.044ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (33.212875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-016000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-016000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-016000 describe deploy/metrics-server -n kube-system: exit status 1 (27.381833ms)

** stderr ** 
	error: context "embed-certs-016000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-016000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (28.779208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-016000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-016000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.195067584s)

-- stdout --
	* [embed-certs-016000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-016000" primary control-plane node in "embed-certs-016000" cluster
	* Restarting existing qemu2 VM for "embed-certs-016000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-016000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:15:14.982041   11494 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:14.985733   11494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:14.985744   11494 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:14.985747   11494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:14.985886   11494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:14.986903   11494 out.go:298] Setting JSON to false
	I0717 11:15:15.004768   11494 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6282,"bootTime":1721233832,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:15:15.004874   11494 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:15:15.009689   11494 out.go:177] * [embed-certs-016000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:15:15.012748   11494 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:15:15.012763   11494 notify.go:220] Checking for updates...
	I0717 11:15:15.021704   11494 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:15:15.028649   11494 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:15:15.031668   11494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:15:15.034744   11494 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:15:15.037648   11494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:15:15.040980   11494 config.go:182] Loaded profile config "embed-certs-016000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:15.041227   11494 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:15:15.045694   11494 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:15:15.052717   11494 start.go:297] selected driver: qemu2
	I0717 11:15:15.052728   11494 start.go:901] validating driver "qemu2" against &{Name:embed-certs-016000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:embed-certs-016000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:15.052803   11494 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:15:15.055119   11494 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:15:15.055144   11494 cni.go:84] Creating CNI manager for ""
	I0717 11:15:15.055152   11494 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:15:15.055171   11494 start.go:340] cluster config:
	{Name:embed-certs-016000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-016000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:15.058645   11494 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:15:15.066679   11494 out.go:177] * Starting "embed-certs-016000" primary control-plane node in "embed-certs-016000" cluster
	I0717 11:15:15.070670   11494 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:15:15.070699   11494 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:15:15.070711   11494 cache.go:56] Caching tarball of preloaded images
	I0717 11:15:15.070780   11494 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:15:15.070790   11494 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:15:15.070846   11494 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/embed-certs-016000/config.json ...
	I0717 11:15:15.071262   11494 start.go:360] acquireMachinesLock for embed-certs-016000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:15.071294   11494 start.go:364] duration metric: took 23.291µs to acquireMachinesLock for "embed-certs-016000"
	I0717 11:15:15.071303   11494 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:15:15.071309   11494 fix.go:54] fixHost starting: 
	I0717 11:15:15.071429   11494 fix.go:112] recreateIfNeeded on embed-certs-016000: state=Stopped err=<nil>
	W0717 11:15:15.071437   11494 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:15:15.075653   11494 out.go:177] * Restarting existing qemu2 VM for "embed-certs-016000" ...
	I0717 11:15:15.085704   11494 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:15.085768   11494 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:13:1c:9c:05:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2
	I0717 11:15:15.087782   11494 main.go:141] libmachine: STDOUT: 
	I0717 11:15:15.087802   11494 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:15.087830   11494 fix.go:56] duration metric: took 16.519667ms for fixHost
	I0717 11:15:15.087836   11494 start.go:83] releasing machines lock for "embed-certs-016000", held for 16.537958ms
	W0717 11:15:15.087843   11494 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:15.087884   11494 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:15.087889   11494 start.go:729] Will try again in 5 seconds ...
	I0717 11:15:20.090122   11494 start.go:360] acquireMachinesLock for embed-certs-016000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:20.090578   11494 start.go:364] duration metric: took 297.875µs to acquireMachinesLock for "embed-certs-016000"
	I0717 11:15:20.090720   11494 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:15:20.090740   11494 fix.go:54] fixHost starting: 
	I0717 11:15:20.091416   11494 fix.go:112] recreateIfNeeded on embed-certs-016000: state=Stopped err=<nil>
	W0717 11:15:20.091440   11494 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:15:20.101026   11494 out.go:177] * Restarting existing qemu2 VM for "embed-certs-016000" ...
	I0717 11:15:20.105053   11494 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:20.105280   11494 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:13:1c:9c:05:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/embed-certs-016000/disk.qcow2
	I0717 11:15:20.114635   11494 main.go:141] libmachine: STDOUT: 
	I0717 11:15:20.114723   11494 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:20.114835   11494 fix.go:56] duration metric: took 24.095292ms for fixHost
	I0717 11:15:20.114862   11494 start.go:83] releasing machines lock for "embed-certs-016000", held for 24.259167ms
	W0717 11:15:20.115093   11494 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-016000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-016000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:20.121034   11494 out.go:177] 
	W0717 11:15:20.125120   11494 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:20.125147   11494 out.go:239] * 
	* 
	W0717 11:15:20.127632   11494 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:15:20.135014   11494 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-016000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (67.291625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
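Editor's note: the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` errors above mean the qemu2 driver could not reach the socket_vmnet daemon's UNIX socket on the CI host, so every VM restart fails before boot. A minimal host-side check is sketched below; the `brew services` command is an assumption based on the standard Homebrew install of socket_vmnet, not something confirmed by this log.

```shell
# Probe the socket_vmnet UNIX socket that the minikube qemu2 driver connects to.
SOCKET=/var/run/socket_vmnet
if [ -S "$SOCKET" ]; then
  echo "socket present: $SOCKET"
else
  # No socket file: the socket_vmnet daemon is likely not running on this host.
  echo "socket missing: $SOCKET"
  echo "try: sudo brew services start socket_vmnet  # assumes Homebrew install"
fi
```

If the socket file exists but connections are still refused, the daemon may have died while leaving a stale socket behind; restarting the service would recreate it.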

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-636000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-636000 create -f testdata/busybox.yaml: exit status 1 (26.782333ms)

** stderr ** 
	error: context "default-k8s-diff-port-636000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-636000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (36.880333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (28.059417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
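Editor's note: the `context "default-k8s-diff-port-636000" does not exist` errors are a downstream effect of the earlier start failure: the cluster never came up, so minikube never wrote a context into the kubeconfig. A quick way to confirm, without needing kubectl, is to grep the kubeconfig for context names (the fallback path below is the kubectl default, an assumption; this run sets KUBECONFIG explicitly, as shown in the log):

```shell
# List context names recorded in the kubeconfig file (no kubectl required).
KUBECONFIG_FILE="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$KUBECONFIG_FILE" ]; then
  # Context entries in kubeconfig YAML appear as "name:" fields.
  grep -E '^[[:space:]]*name:' "$KUBECONFIG_FILE" || echo "no contexts in $KUBECONFIG_FILE"
else
  echo "kubeconfig not found: $KUBECONFIG_FILE"
fi
```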

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-636000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-636000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-636000 describe deploy/metrics-server -n kube-system: exit status 1 (26.948833ms)

** stderr ** 
	error: context "default-k8s-diff-port-636000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-636000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (28.870292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-636000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-636000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2: exit status 80 (5.189487667s)

-- stdout --
	* [default-k8s-diff-port-636000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-636000" primary control-plane node in "default-k8s-diff-port-636000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-636000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-636000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:15:18.867856   11535 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:18.867971   11535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:18.867974   11535 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:18.867976   11535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:18.868121   11535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:18.869149   11535 out.go:298] Setting JSON to false
	I0717 11:15:18.884991   11535 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6286,"bootTime":1721233832,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:15:18.885066   11535 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:15:18.889668   11535 out.go:177] * [default-k8s-diff-port-636000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:15:18.896577   11535 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:15:18.896612   11535 notify.go:220] Checking for updates...
	I0717 11:15:18.903515   11535 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:15:18.906560   11535 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:15:18.909583   11535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:15:18.912562   11535 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:15:18.915608   11535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:15:18.918825   11535 config.go:182] Loaded profile config "default-k8s-diff-port-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:18.919095   11535 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:15:18.922557   11535 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:15:18.929596   11535 start.go:297] selected driver: qemu2
	I0717 11:15:18.929603   11535 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:18.929659   11535 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:15:18.931828   11535 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 11:15:18.931874   11535 cni.go:84] Creating CNI manager for ""
	I0717 11:15:18.931880   11535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:15:18.931906   11535 start.go:340] cluster config:
	{Name:default-k8s-diff-port-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-636000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:18.935331   11535 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:15:18.943495   11535 out.go:177] * Starting "default-k8s-diff-port-636000" primary control-plane node in "default-k8s-diff-port-636000" cluster
	I0717 11:15:18.946554   11535 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 11:15:18.946569   11535 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 11:15:18.946588   11535 cache.go:56] Caching tarball of preloaded images
	I0717 11:15:18.946659   11535 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:15:18.946672   11535 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 11:15:18.946734   11535 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/default-k8s-diff-port-636000/config.json ...
	I0717 11:15:18.947204   11535 start.go:360] acquireMachinesLock for default-k8s-diff-port-636000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:18.947233   11535 start.go:364] duration metric: took 23.459µs to acquireMachinesLock for "default-k8s-diff-port-636000"
	I0717 11:15:18.947242   11535 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:15:18.947247   11535 fix.go:54] fixHost starting: 
	I0717 11:15:18.947374   11535 fix.go:112] recreateIfNeeded on default-k8s-diff-port-636000: state=Stopped err=<nil>
	W0717 11:15:18.947383   11535 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:15:18.951576   11535 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-636000" ...
	I0717 11:15:18.961607   11535 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:18.961646   11535 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:2b:8f:df:60:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2
	I0717 11:15:18.963701   11535 main.go:141] libmachine: STDOUT: 
	I0717 11:15:18.963742   11535 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:18.963769   11535 fix.go:56] duration metric: took 16.51725ms for fixHost
	I0717 11:15:18.963773   11535 start.go:83] releasing machines lock for "default-k8s-diff-port-636000", held for 16.536208ms
	W0717 11:15:18.963781   11535 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:18.963812   11535 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:18.963817   11535 start.go:729] Will try again in 5 seconds ...
	I0717 11:15:23.965946   11535 start.go:360] acquireMachinesLock for default-k8s-diff-port-636000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:23.966414   11535 start.go:364] duration metric: took 363.625µs to acquireMachinesLock for "default-k8s-diff-port-636000"
	I0717 11:15:23.966539   11535 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:15:23.966564   11535 fix.go:54] fixHost starting: 
	I0717 11:15:23.967327   11535 fix.go:112] recreateIfNeeded on default-k8s-diff-port-636000: state=Stopped err=<nil>
	W0717 11:15:23.967352   11535 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:15:23.981875   11535 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-636000" ...
	I0717 11:15:23.984878   11535 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:23.985091   11535 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:2b:8f:df:60:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/default-k8s-diff-port-636000/disk.qcow2
	I0717 11:15:23.994671   11535 main.go:141] libmachine: STDOUT: 
	I0717 11:15:23.994735   11535 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:23.994834   11535 fix.go:56] duration metric: took 28.277125ms for fixHost
	I0717 11:15:23.994850   11535 start.go:83] releasing machines lock for "default-k8s-diff-port-636000", held for 28.414ms
	W0717 11:15:23.995014   11535 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-636000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-636000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:24.003684   11535 out.go:177] 
	W0717 11:15:24.006754   11535 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:24.006778   11535 out.go:239] * 
	* 
	W0717 11:15:24.009298   11535 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:15:24.016696   11535 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-636000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (67.15275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-016000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (31.333083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-016000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-016000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-016000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.066084ms)

** stderr ** 
	error: context "embed-certs-016000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-016000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (28.543166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-016000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
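The `-want +got` block above is a go-cmp style diff: every expected image appears as missing because `image list --format=json` returned nothing from the never-started host. A stdlib-only sketch of the same set difference (hypothetical helper; the test harness itself produces this diff with go-cmp, not this code):

```go
package main

import (
	"fmt"
	"sort"
)

// missingImages returns the entries of want that do not appear in got,
// mirroring the "- " lines of the diff printed by the test.
func missingImages(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var missing []string
	for _, img := range want {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	sort.Strings(missing)
	return missing
}

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // image list returned nothing: the host never started
	for _, img := range missingImages(want, got) {
		fmt.Println("-", img)
	}
}
```

With an empty `got`, every `want` entry is reported, which is exactly the all-minus diff shown above.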
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (27.9905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-016000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-016000 --alsologtostderr -v=1: exit status 83 (35.593875ms)

-- stdout --
	* The control-plane node embed-certs-016000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-016000"

-- /stdout --
** stderr ** 
	I0717 11:15:20.398165   11557 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:20.398317   11557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:20.398322   11557 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:20.398324   11557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:20.398453   11557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:20.398667   11557 out.go:298] Setting JSON to false
	I0717 11:15:20.398673   11557 mustload.go:65] Loading cluster: embed-certs-016000
	I0717 11:15:20.398866   11557 config.go:182] Loaded profile config "embed-certs-016000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:20.400686   11557 out.go:177] * The control-plane node embed-certs-016000 host is not running: state=Stopped
	I0717 11:15:20.404334   11557 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-016000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-016000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (28.081791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (28.015042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-016000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-412000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-412000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.794106375s)

-- stdout --
	* [newest-cni-412000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-412000" primary control-plane node in "newest-cni-412000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-412000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0717 11:15:20.699125   11575 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:20.699240   11575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:20.699243   11575 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:20.699245   11575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:20.699364   11575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:20.700465   11575 out.go:298] Setting JSON to false
	I0717 11:15:20.716229   11575 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6288,"bootTime":1721233832,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:15:20.716304   11575 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:15:20.721340   11575 out.go:177] * [newest-cni-412000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:15:20.728176   11575 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:15:20.728252   11575 notify.go:220] Checking for updates...
	I0717 11:15:20.735353   11575 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:15:20.736810   11575 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:15:20.740324   11575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:15:20.743302   11575 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:15:20.746346   11575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:15:20.749627   11575 config.go:182] Loaded profile config "default-k8s-diff-port-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:20.749692   11575 config.go:182] Loaded profile config "multinode-934000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:20.749743   11575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:15:20.754274   11575 out.go:177] * Using the qemu2 driver based on user configuration
	I0717 11:15:20.761343   11575 start.go:297] selected driver: qemu2
	I0717 11:15:20.761349   11575 start.go:901] validating driver "qemu2" against <nil>
	I0717 11:15:20.761357   11575 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:15:20.763494   11575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0717 11:15:20.763515   11575 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0717 11:15:20.771291   11575 out.go:177] * Automatically selected the socket_vmnet network
	I0717 11:15:20.774472   11575 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 11:15:20.774489   11575 cni.go:84] Creating CNI manager for ""
	I0717 11:15:20.774506   11575 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:15:20.774514   11575 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 11:15:20.774544   11575 start.go:340] cluster config:
	{Name:newest-cni-412000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-412000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:20.778210   11575 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:15:20.787275   11575 out.go:177] * Starting "newest-cni-412000" primary control-plane node in "newest-cni-412000" cluster
	I0717 11:15:20.791357   11575 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 11:15:20.791379   11575 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0717 11:15:20.791396   11575 cache.go:56] Caching tarball of preloaded images
	I0717 11:15:20.791463   11575 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:15:20.791469   11575 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0717 11:15:20.791537   11575 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/newest-cni-412000/config.json ...
	I0717 11:15:20.791559   11575 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/newest-cni-412000/config.json: {Name:mk9f9946a73b4e7568f35af94808d46878c48cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 11:15:20.791794   11575 start.go:360] acquireMachinesLock for newest-cni-412000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:20.791829   11575 start.go:364] duration metric: took 29.916µs to acquireMachinesLock for "newest-cni-412000"
	I0717 11:15:20.791841   11575 start.go:93] Provisioning new machine with config: &{Name:newest-cni-412000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-412000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:15:20.791875   11575 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:15:20.800319   11575 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:15:20.818523   11575 start.go:159] libmachine.API.Create for "newest-cni-412000" (driver="qemu2")
	I0717 11:15:20.818556   11575 client.go:168] LocalClient.Create starting
	I0717 11:15:20.818620   11575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:15:20.818659   11575 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:20.818672   11575 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:20.818712   11575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:15:20.818736   11575 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:20.818742   11575 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:20.819118   11575 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:15:20.952426   11575 main.go:141] libmachine: Creating SSH key...
	I0717 11:15:21.104908   11575 main.go:141] libmachine: Creating Disk image...
	I0717 11:15:21.104914   11575 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:15:21.105091   11575 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2
	I0717 11:15:21.114455   11575 main.go:141] libmachine: STDOUT: 
	I0717 11:15:21.114473   11575 main.go:141] libmachine: STDERR: 
	I0717 11:15:21.114512   11575 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2 +20000M
	I0717 11:15:21.122433   11575 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:15:21.122445   11575 main.go:141] libmachine: STDERR: 
	I0717 11:15:21.122455   11575 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2
	I0717 11:15:21.122464   11575 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:15:21.122475   11575 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:21.122500   11575 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:81:e5:d8:63:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2
	I0717 11:15:21.124054   11575 main.go:141] libmachine: STDOUT: 
	I0717 11:15:21.124068   11575 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:21.124085   11575 client.go:171] duration metric: took 305.527459ms to LocalClient.Create
	I0717 11:15:23.126248   11575 start.go:128] duration metric: took 2.334367833s to createHost
	I0717 11:15:23.126302   11575 start.go:83] releasing machines lock for "newest-cni-412000", held for 2.334478667s
	W0717 11:15:23.126368   11575 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:23.142580   11575 out.go:177] * Deleting "newest-cni-412000" in qemu2 ...
	W0717 11:15:23.166726   11575 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:23.166762   11575 start.go:729] Will try again in 5 seconds ...
	I0717 11:15:28.168974   11575 start.go:360] acquireMachinesLock for newest-cni-412000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:28.169513   11575 start.go:364] duration metric: took 431µs to acquireMachinesLock for "newest-cni-412000"
	I0717 11:15:28.169685   11575 start.go:93] Provisioning new machine with config: &{Name:newest-cni-412000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-412000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 11:15:28.169952   11575 start.go:125] createHost starting for "" (driver="qemu2")
	I0717 11:15:28.174800   11575 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 11:15:28.224665   11575 start.go:159] libmachine.API.Create for "newest-cni-412000" (driver="qemu2")
	I0717 11:15:28.224710   11575 client.go:168] LocalClient.Create starting
	I0717 11:15:28.224828   11575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/ca.pem
	I0717 11:15:28.224897   11575 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:28.224913   11575 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:28.224991   11575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19283-6848/.minikube/certs/cert.pem
	I0717 11:15:28.225035   11575 main.go:141] libmachine: Decoding PEM data...
	I0717 11:15:28.225049   11575 main.go:141] libmachine: Parsing certificate...
	I0717 11:15:28.225741   11575 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso...
	I0717 11:15:28.368743   11575 main.go:141] libmachine: Creating SSH key...
	I0717 11:15:28.406151   11575 main.go:141] libmachine: Creating Disk image...
	I0717 11:15:28.406156   11575 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0717 11:15:28.406308   11575 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2.raw /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2
	I0717 11:15:28.415383   11575 main.go:141] libmachine: STDOUT: 
	I0717 11:15:28.415402   11575 main.go:141] libmachine: STDERR: 
	I0717 11:15:28.415449   11575 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2 +20000M
	I0717 11:15:28.423200   11575 main.go:141] libmachine: STDOUT: Image resized.
	
	I0717 11:15:28.423217   11575 main.go:141] libmachine: STDERR: 
	I0717 11:15:28.423231   11575 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2
	I0717 11:15:28.423235   11575 main.go:141] libmachine: Starting QEMU VM...
	I0717 11:15:28.423259   11575 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:28.423292   11575 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:f5:90:8d:3c:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2
	I0717 11:15:28.424861   11575 main.go:141] libmachine: STDOUT: 
	I0717 11:15:28.424879   11575 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:28.424900   11575 client.go:171] duration metric: took 200.187125ms to LocalClient.Create
	I0717 11:15:30.427071   11575 start.go:128] duration metric: took 2.25710675s to createHost
	I0717 11:15:30.427119   11575 start.go:83] releasing machines lock for "newest-cni-412000", held for 2.25759575s
	W0717 11:15:30.427463   11575 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-412000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-412000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:30.437166   11575 out.go:177] 
	W0717 11:15:30.443241   11575 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:30.443265   11575 out.go:239] * 
	* 
	W0717 11:15:30.445578   11575 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:15:30.458117   11575 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-412000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000: exit status 7 (71.068292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-412000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.87s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-636000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (30.891917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-636000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-636000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-636000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.001292ms)

** stderr ** 
	error: context "default-k8s-diff-port-636000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-636000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (28.452375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-636000 image list --format=json
start_stop_delete_test.go:304: v1.30.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.2",
- 	"registry.k8s.io/kube-controller-manager:v1.30.2",
- 	"registry.k8s.io/kube-proxy:v1.30.2",
- 	"registry.k8s.io/kube-scheduler:v1.30.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (29.305708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-636000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-636000 --alsologtostderr -v=1: exit status 83 (41.265959ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-636000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-636000"

-- /stdout --
** stderr ** 
	I0717 11:15:24.281264   11597 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:24.281422   11597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:24.281425   11597 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:24.281427   11597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:24.281551   11597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:24.281759   11597 out.go:298] Setting JSON to false
	I0717 11:15:24.281766   11597 mustload.go:65] Loading cluster: default-k8s-diff-port-636000
	I0717 11:15:24.281970   11597 config.go:182] Loaded profile config "default-k8s-diff-port-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 11:15:24.286573   11597 out.go:177] * The control-plane node default-k8s-diff-port-636000 host is not running: state=Stopped
	I0717 11:15:24.290724   11597 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-636000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-636000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (28.209917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (28.514667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-412000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-412000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.180550167s)

-- stdout --
	* [newest-cni-412000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-412000" primary control-plane node in "newest-cni-412000" cluster
	* Restarting existing qemu2 VM for "newest-cni-412000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-412000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0717 11:15:32.743029   11647 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:32.743144   11647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:32.743148   11647 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:32.743150   11647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:32.743267   11647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:32.744244   11647 out.go:298] Setting JSON to false
	I0717 11:15:32.760263   11647 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6300,"bootTime":1721233832,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 11:15:32.760339   11647 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 11:15:32.765541   11647 out.go:177] * [newest-cni-412000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 11:15:32.772566   11647 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 11:15:32.772615   11647 notify.go:220] Checking for updates...
	I0717 11:15:32.779589   11647 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 11:15:32.782582   11647 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 11:15:32.785550   11647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 11:15:32.788545   11647 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 11:15:32.791621   11647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 11:15:32.794746   11647 config.go:182] Loaded profile config "newest-cni-412000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0717 11:15:32.795025   11647 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 11:15:32.798523   11647 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 11:15:32.804462   11647 start.go:297] selected driver: qemu2
	I0717 11:15:32.804469   11647 start.go:901] validating driver "qemu2" against &{Name:newest-cni-412000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-412000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:32.804539   11647 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 11:15:32.806899   11647 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 11:15:32.806940   11647 cni.go:84] Creating CNI manager for ""
	I0717 11:15:32.806949   11647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 11:15:32.806991   11647 start.go:340] cluster config:
	{Name:newest-cni-412000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-412000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 11:15:32.810439   11647 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 11:15:32.818546   11647 out.go:177] * Starting "newest-cni-412000" primary control-plane node in "newest-cni-412000" cluster
	I0717 11:15:32.821573   11647 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 11:15:32.821588   11647 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0717 11:15:32.821599   11647 cache.go:56] Caching tarball of preloaded images
	I0717 11:15:32.821659   11647 preload.go:172] Found /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 11:15:32.821664   11647 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0717 11:15:32.821736   11647 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/newest-cni-412000/config.json ...
	I0717 11:15:32.822158   11647 start.go:360] acquireMachinesLock for newest-cni-412000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:32.822185   11647 start.go:364] duration metric: took 21.167µs to acquireMachinesLock for "newest-cni-412000"
	I0717 11:15:32.822193   11647 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:15:32.822199   11647 fix.go:54] fixHost starting: 
	I0717 11:15:32.822315   11647 fix.go:112] recreateIfNeeded on newest-cni-412000: state=Stopped err=<nil>
	W0717 11:15:32.822324   11647 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:15:32.824076   11647 out.go:177] * Restarting existing qemu2 VM for "newest-cni-412000" ...
	I0717 11:15:32.832535   11647 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:32.832577   11647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:f5:90:8d:3c:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2
	I0717 11:15:32.834403   11647 main.go:141] libmachine: STDOUT: 
	I0717 11:15:32.834443   11647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:32.834469   11647 fix.go:56] duration metric: took 12.269583ms for fixHost
	I0717 11:15:32.834472   11647 start.go:83] releasing machines lock for "newest-cni-412000", held for 12.283042ms
	W0717 11:15:32.834478   11647 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:32.834502   11647 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:32.834507   11647 start.go:729] Will try again in 5 seconds ...
	I0717 11:15:37.836644   11647 start.go:360] acquireMachinesLock for newest-cni-412000: {Name:mkbb95b3f8e9962ecbbea2f57fb6c4d28aaba22d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 11:15:37.837097   11647 start.go:364] duration metric: took 343.583µs to acquireMachinesLock for "newest-cni-412000"
	I0717 11:15:37.837217   11647 start.go:96] Skipping create...Using existing machine configuration
	I0717 11:15:37.837238   11647 fix.go:54] fixHost starting: 
	I0717 11:15:37.837968   11647 fix.go:112] recreateIfNeeded on newest-cni-412000: state=Stopped err=<nil>
	W0717 11:15:37.837995   11647 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 11:15:37.847254   11647 out.go:177] * Restarting existing qemu2 VM for "newest-cni-412000" ...
	I0717 11:15:37.851389   11647 qemu.go:418] Using hvf for hardware acceleration
	I0717 11:15:37.851640   11647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:f5:90:8d:3c:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19283-6848/.minikube/machines/newest-cni-412000/disk.qcow2
	I0717 11:15:37.860525   11647 main.go:141] libmachine: STDOUT: 
	I0717 11:15:37.860599   11647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0717 11:15:37.860677   11647 fix.go:56] duration metric: took 23.436083ms for fixHost
	I0717 11:15:37.860693   11647 start.go:83] releasing machines lock for "newest-cni-412000", held for 23.573833ms
	W0717 11:15:37.860886   11647 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-412000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-412000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0717 11:15:37.869394   11647 out.go:177] 
	W0717 11:15:37.873489   11647 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0717 11:15:37.873517   11647 out.go:239] * 
	* 
	W0717 11:15:37.876062   11647 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 11:15:37.883446   11647 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-412000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000: exit status 7 (68.051667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-412000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
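Every qemu2 start in this run fails with the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon not listening on the host rather than at the individual tests. A minimal pre-flight sketch follows; the socket path is the one shown in the log above, and `check_vmnet_socket` is a hypothetical helper for illustration, not part of minikube:

```shell
# Sketch: verify the socket_vmnet control socket exists before running the
# qemu2-driver suite. check_vmnet_socket is a hypothetical helper; the path
# matches the SocketVMnetPath recorded in the cluster config above.
check_vmnet_socket() {
  sock="$1"
  if [ -S "$sock" ]; then
    # A Unix socket is present; socket_vmnet is at least set up.
    echo "present"
  else
    echo "missing"
  fi
}

check_vmnet_socket /var/run/socket_vmnet
```

If the check reports "missing", restarting the socket_vmnet service on the host before re-running the suite is the likely fix; "Connection refused" with the socket present would instead suggest the daemon died while leaving a stale socket behind.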

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-412000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000: exit status 7 (29.668584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-412000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-412000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-412000 --alsologtostderr -v=1: exit status 83 (43.927209ms)

-- stdout --
	* The control-plane node newest-cni-412000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-412000"

-- /stdout --
** stderr ** 
	I0717 11:15:38.064444   11663 out.go:291] Setting OutFile to fd 1 ...
	I0717 11:15:38.064587   11663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:38.064591   11663 out.go:304] Setting ErrFile to fd 2...
	I0717 11:15:38.064593   11663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 11:15:38.064717   11663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 11:15:38.064911   11663 out.go:298] Setting JSON to false
	I0717 11:15:38.064918   11663 mustload.go:65] Loading cluster: newest-cni-412000
	I0717 11:15:38.065110   11663 config.go:182] Loaded profile config "newest-cni-412000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0717 11:15:38.069654   11663 out.go:177] * The control-plane node newest-cni-412000 host is not running: state=Stopped
	I0717 11:15:38.077590   11663 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-412000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-412000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000: exit status 7 (30.045584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-412000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000: exit status 7 (29.096833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-412000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.2/json-events 6.63
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.08
18 TestDownloadOnly/v1.30.2/DeleteAll 0.1
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.29
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.48
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.11
52 TestErrorSpam/stop 9.72
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.75
64 TestFunctional/serial/CacheCmd/cache/add_local 1.1
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.24
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.2
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
116 TestFunctional/parallel/ProfileCmd/profile_list 0.08
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
121 TestFunctional/parallel/Version/short 0.04
128 TestFunctional/parallel/ImageCommands/Setup 1.89
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 1.88
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.19
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 1.1
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.3
267 TestNoKubernetes/serial/Stop 3.28
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
284 TestStartStop/group/old-k8s-version/serial/Stop 3.57
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 3.25
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.1
308 TestStartStop/group/embed-certs/serial/Stop 2.02
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.5
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 1.99
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-580000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-580000: exit status 85 (95.317334ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |          |
	|         | -p download-only-580000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:49:03
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:49:03.610039    7338 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:49:03.610193    7338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:03.610197    7338 out.go:304] Setting ErrFile to fd 2...
	I0717 10:49:03.610199    7338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:03.610315    7338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	W0717 10:49:03.610400    7338 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19283-6848/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19283-6848/.minikube/config/config.json: no such file or directory
	I0717 10:49:03.611657    7338 out.go:298] Setting JSON to true
	I0717 10:49:03.629060    7338 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4711,"bootTime":1721233832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:49:03.629135    7338 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:49:03.634767    7338 out.go:97] [download-only-580000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:49:03.634877    7338 notify.go:220] Checking for updates...
	W0717 10:49:03.634945    7338 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 10:49:03.638940    7338 out.go:169] MINIKUBE_LOCATION=19283
	I0717 10:49:03.647697    7338 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:49:03.661119    7338 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:49:03.663419    7338 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:49:03.666622    7338 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	W0717 10:49:03.672314    7338 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:49:03.672528    7338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:49:03.675610    7338 out.go:97] Using the qemu2 driver based on user configuration
	I0717 10:49:03.675629    7338 start.go:297] selected driver: qemu2
	I0717 10:49:03.675644    7338 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:49:03.675714    7338 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:49:03.679401    7338 out.go:169] Automatically selected the socket_vmnet network
	I0717 10:49:03.685403    7338 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0717 10:49:03.685493    7338 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:49:03.685570    7338 cni.go:84] Creating CNI manager for ""
	I0717 10:49:03.685589    7338 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 10:49:03.685649    7338 start.go:340] cluster config:
	{Name:download-only-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:49:03.689709    7338 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:49:03.694654    7338 out.go:97] Downloading VM boot image ...
	I0717 10:49:03.694671    7338 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/iso/arm64/minikube-v1.33.1-1721146474-19264-arm64.iso
	I0717 10:49:08.406985    7338 out.go:97] Starting "download-only-580000" primary control-plane node in "download-only-580000" cluster
	I0717 10:49:08.407019    7338 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:49:08.463029    7338 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 10:49:08.463054    7338 cache.go:56] Caching tarball of preloaded images
	I0717 10:49:08.463846    7338 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:49:08.468149    7338 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 10:49:08.468156    7338 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:08.547375    7338 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0717 10:49:13.645458    7338 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:13.645639    7338 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:14.341379    7338 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 10:49:14.341575    7338 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/download-only-580000/config.json ...
	I0717 10:49:14.341606    7338 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/download-only-580000/config.json: {Name:mk98ed7f00ff76b7ae93d12fd946317f6e852e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:49:14.342755    7338 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 10:49:14.342947    7338 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0717 10:49:14.707705    7338 out.go:169] 
	W0717 10:49:14.714813    7338 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19283-6848/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60 0x109399a60] Decompressors:map[bz2:0x1400080d030 gz:0x1400080d038 tar:0x1400080cfe0 tar.bz2:0x1400080cff0 tar.gz:0x1400080d000 tar.xz:0x1400080d010 tar.zst:0x1400080d020 tbz2:0x1400080cff0 tgz:0x1400080d000 txz:0x1400080d010 tzst:0x1400080d020 xz:0x1400080d040 zip:0x1400080d050 zst:0x1400080d048] Getters:map[file:0x14000886e10 http:0x140005d6190 https:0x140005d61e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0717 10:49:14.714837    7338 out_reason.go:110] 
	W0717 10:49:14.722704    7338 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 10:49:14.726614    7338 out.go:169] 
	
	
	* The control-plane node download-only-580000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-580000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
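The LogsDuration output above records why the v1.20.0 download-only run failed: kubectl is fetched with a `?checksum=file:` query, and the referenced `.sha256` URL returns 404 (v1.20.0 appears to predate published darwin/arm64 kubectl binaries). A sketch of how that checksum URL is assembled, purely for illustration; `kubectl_checksum_url` is a hypothetical helper, not minikube code:

```shell
# Sketch: build the dl.k8s.io checksum URL that 404s in the log above.
# kubectl_checksum_url is a hypothetical helper for illustration only.
kubectl_checksum_url() {
  # $1 = kubernetes version, $2 = os, $3 = arch
  printf 'https://dl.k8s.io/release/%s/bin/%s/%s/kubectl.sha256\n' "$1" "$2" "$3"
}

# The exact URL the failing download referenced:
kubectl_checksum_url v1.20.0 darwin arm64
```

Substituting a version that does ship darwin/arm64 binaries (e.g. the v1.30.2 run later in this report, which passed) yields a checksum URL that resolves, which is consistent with only the v1.20.0 jobs failing this way.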

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-580000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.2/json-events (6.63s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-478000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-478000 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=qemu2 : (6.632135459s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (6.63s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-478000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-478000: exit status 85 (79.826042ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | -p download-only-580000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-580000        | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| start   | -o=json --download-only        | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | -p download-only-478000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:49:15
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:49:15.138358    7371 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:49:15.138489    7371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:15.138492    7371 out.go:304] Setting ErrFile to fd 2...
	I0717 10:49:15.138495    7371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:15.138603    7371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:49:15.139610    7371 out.go:298] Setting JSON to true
	I0717 10:49:15.155479    7371 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4723,"bootTime":1721233832,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:49:15.155567    7371 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:49:15.160684    7371 out.go:97] [download-only-478000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:49:15.160765    7371 notify.go:220] Checking for updates...
	I0717 10:49:15.164582    7371 out.go:169] MINIKUBE_LOCATION=19283
	I0717 10:49:15.167681    7371 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:49:15.171509    7371 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:49:15.174627    7371 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:49:15.177668    7371 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	W0717 10:49:15.183540    7371 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:49:15.183686    7371 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:49:15.186568    7371 out.go:97] Using the qemu2 driver based on user configuration
	I0717 10:49:15.186576    7371 start.go:297] selected driver: qemu2
	I0717 10:49:15.186580    7371 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:49:15.186628    7371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:49:15.189687    7371 out.go:169] Automatically selected the socket_vmnet network
	I0717 10:49:15.194634    7371 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0717 10:49:15.194725    7371 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:49:15.194742    7371 cni.go:84] Creating CNI manager for ""
	I0717 10:49:15.194756    7371 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:49:15.194760    7371 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 10:49:15.194804    7371 start.go:340] cluster config:
	{Name:download-only-478000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:49:15.198142    7371 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:49:15.201678    7371 out.go:97] Starting "download-only-478000" primary control-plane node in "download-only-478000" cluster
	I0717 10:49:15.201686    7371 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:49:15.262891    7371 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:49:15.262908    7371 cache.go:56] Caching tarball of preloaded images
	I0717 10:49:15.263062    7371 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 10:49:15.266290    7371 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 10:49:15.266301    7371 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:15.347094    7371 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4?checksum=md5:3bd37d965c85173ac77cdcc664938efd -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4
	I0717 10:49:19.850663    7371 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:19.850877    7371 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-478000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-478000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.08s)

TestDownloadOnly/v1.30.2/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.10s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-478000
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-012000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-012000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (6.285901625s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.29s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-012000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-012000: exit status 85 (79.185959ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | -p download-only-580000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-580000             | download-only-580000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| start   | -o=json --download-only             | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | -p download-only-478000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| delete  | -p download-only-478000             | download-only-478000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT | 17 Jul 24 10:49 PDT |
	| start   | -o=json --download-only             | download-only-012000 | jenkins | v1.33.1 | 17 Jul 24 10:49 PDT |                     |
	|         | -p download-only-012000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 10:49:22
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 10:49:22.054488    7393 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:49:22.054622    7393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:22.054625    7393 out.go:304] Setting ErrFile to fd 2...
	I0717 10:49:22.054628    7393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:49:22.054758    7393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:49:22.055814    7393 out.go:298] Setting JSON to true
	I0717 10:49:22.071631    7393 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4730,"bootTime":1721233832,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:49:22.071693    7393 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:49:22.075327    7393 out.go:97] [download-only-012000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:49:22.075428    7393 notify.go:220] Checking for updates...
	I0717 10:49:22.079249    7393 out.go:169] MINIKUBE_LOCATION=19283
	I0717 10:49:22.083246    7393 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:49:22.087251    7393 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:49:22.088643    7393 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:49:22.092215    7393 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	W0717 10:49:22.098253    7393 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 10:49:22.098427    7393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:49:22.101140    7393 out.go:97] Using the qemu2 driver based on user configuration
	I0717 10:49:22.101149    7393 start.go:297] selected driver: qemu2
	I0717 10:49:22.101153    7393 start.go:901] validating driver "qemu2" against <nil>
	I0717 10:49:22.101213    7393 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 10:49:22.104215    7393 out.go:169] Automatically selected the socket_vmnet network
	I0717 10:49:22.109275    7393 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0717 10:49:22.109365    7393 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 10:49:22.109402    7393 cni.go:84] Creating CNI manager for ""
	I0717 10:49:22.109410    7393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 10:49:22.109416    7393 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 10:49:22.109461    7393 start.go:340] cluster config:
	{Name:download-only-012000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-012000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet St
aticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:49:22.112891    7393 iso.go:125] acquiring lock: {Name:mkd4595d0afd677e8180511408b4c337c4176ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 10:49:22.116133    7393 out.go:97] Starting "download-only-012000" primary control-plane node in "download-only-012000" cluster
	I0717 10:49:22.116140    7393 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 10:49:22.171937    7393 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0717 10:49:22.171975    7393 cache.go:56] Caching tarball of preloaded images
	I0717 10:49:22.172161    7393 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 10:49:22.175510    7393 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 10:49:22.175517    7393 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:22.254155    7393 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0717 10:49:26.298552    7393 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:26.298730    7393 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0717 10:49:26.818256    7393 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0717 10:49:26.818455    7393 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/download-only-012000/config.json ...
	I0717 10:49:26.818478    7393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-6848/.minikube/profiles/download-only-012000/config.json: {Name:mkc906f625b1edbd451f33896fc50317836bc23b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 10:49:26.819608    7393 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 10:49:26.819737    7393 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19283-6848/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-012000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-012000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-012000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.28s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-738000 --alsologtostderr --binary-mirror http://127.0.0.1:51055 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-738000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-738000
--- PASS: TestBinaryMirror (0.28s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-914000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-914000: exit status 85 (55.596334ms)

-- stdout --
	* Profile "addons-914000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-914000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-914000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-914000: exit status 85 (59.508458ms)

-- stdout --
	* Profile "addons-914000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-914000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.48s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.48s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status: exit status 7 (31.024875ms)

-- stdout --
	nospam-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status: exit status 7 (29.839292ms)

-- stdout --
	nospam-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status: exit status 7 (29.604417ms)

-- stdout --
	nospam-854000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause: exit status 83 (37.774292ms)

-- stdout --
	* The control-plane node nospam-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-854000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause: exit status 83 (38.838916ms)

-- stdout --
	* The control-plane node nospam-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-854000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause: exit status 83 (42.740166ms)

-- stdout --
	* The control-plane node nospam-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-854000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.11s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause: exit status 83 (37.825375ms)

-- stdout --
	* The control-plane node nospam-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-854000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause: exit status 83 (36.437458ms)

-- stdout --
	* The control-plane node nospam-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-854000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause: exit status 83 (39.163042ms)

-- stdout --
	* The control-plane node nospam-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-854000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.11s)

TestErrorSpam/stop (9.72s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 stop: (3.543137917s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 stop: (3.485132833s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-854000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-854000 stop: (2.685814709s)
--- PASS: TestErrorSpam/stop (9.72s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19283-6848/.minikube/files/etc/test/nested/copy/7336/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3456285486/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cache add minikube-local-cache-test:functional-928000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 cache delete minikube-local-cache-test:functional-928000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-928000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 config get cpus: exit status 14 (29.274208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 config get cpus: exit status 14 (40.387959ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-928000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-928000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.666042ms)

-- stdout --
	* [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 10:51:00.401218    7896 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:51:00.401336    7896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:00.401339    7896 out.go:304] Setting ErrFile to fd 2...
	I0717 10:51:00.401341    7896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:00.401493    7896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:51:00.402495    7896 out.go:298] Setting JSON to false
	I0717 10:51:00.418460    7896 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4828,"bootTime":1721233832,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:51:00.418534    7896 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:51:00.423457    7896 out.go:177] * [functional-928000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0717 10:51:00.431421    7896 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:51:00.431458    7896 notify.go:220] Checking for updates...
	I0717 10:51:00.437416    7896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:51:00.440387    7896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:51:00.441662    7896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:51:00.444389    7896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:51:00.447400    7896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:51:00.450640    7896 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:51:00.450904    7896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:51:00.455310    7896 out.go:177] * Using the qemu2 driver based on existing profile
	I0717 10:51:00.462402    7896 start.go:297] selected driver: qemu2
	I0717 10:51:00.462408    7896 start.go:901] validating driver "qemu2" against &{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:51:00.462468    7896 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:51:00.469530    7896 out.go:177] 
	W0717 10:51:00.473421    7896 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 10:51:00.480361    7896 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-928000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-928000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-928000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.003791ms)

-- stdout --
	* [functional-928000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 10:51:00.287095    7892 out.go:291] Setting OutFile to fd 1 ...
	I0717 10:51:00.287213    7892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:00.287216    7892 out.go:304] Setting ErrFile to fd 2...
	I0717 10:51:00.287219    7892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 10:51:00.287350    7892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-6848/.minikube/bin
	I0717 10:51:00.288824    7892 out.go:298] Setting JSON to false
	I0717 10:51:00.305282    7892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4828,"bootTime":1721233832,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0717 10:51:00.305369    7892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 10:51:00.310463    7892 out.go:177] * [functional-928000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0717 10:51:00.316401    7892 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 10:51:00.316441    7892 notify.go:220] Checking for updates...
	I0717 10:51:00.323408    7892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	I0717 10:51:00.326337    7892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0717 10:51:00.329384    7892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 10:51:00.332381    7892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	I0717 10:51:00.333613    7892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 10:51:00.336736    7892 config.go:182] Loaded profile config "functional-928000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 10:51:00.336992    7892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 10:51:00.344648    7892 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0717 10:51:00.349413    7892 start.go:297] selected driver: qemu2
	I0717 10:51:00.349418    7892 start.go:901] validating driver "qemu2" against &{Name:functional-928000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.2 ClusterName:functional-928000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 10:51:00.349487    7892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 10:51:00.356417    7892 out.go:177] 
	W0717 10:51:00.360380    7892 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 10:51:00.363404    7892 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "44.825875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "32.9585ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "45.793791ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "30.500416ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.861701459s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-928000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.89s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image rm docker.io/kicbase/echo-server:functional-928000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-928000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 image save --daemon docker.io/kicbase/echo-server:functional-928000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-928000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.0115615s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-928000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-928000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-928000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-928000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-968000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-968000 --output=json --user=testUser: (1.879891791s)
--- PASS: TestJSONOutput/stop/Command (1.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-173000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-173000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.940375ms)

-- stdout --
	{"specversion":"1.0","id":"f288bce6-3d2f-4361-8618-b10d10cda8a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-173000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"978daaa1-6f40-42dc-8f35-bf148bd47b4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19283"}}
	{"specversion":"1.0","id":"b110e3da-43c9-4b25-aa1a-3e689b21b300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig"}}
	{"specversion":"1.0","id":"654e8cd6-98ce-42c8-9ecc-9dae06c9c098","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7d987910-1e8e-4173-a375-3ca6a80969c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"49ba5c69-2de4-4706-bbc1-ee8cef150576","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube"}}
	{"specversion":"1.0","id":"2802ac26-f81f-40e1-b024-639e2924d4a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dbd199e9-243d-4bc5-8f52-101b9b02d7a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-173000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-173000
--- PASS: TestErrorJSONOutput (0.19s)
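The `--output=json` lines above are CloudEvents envelopes. For readers post-processing this log, here is a minimal sketch of pulling the exit code and error name out of the `io.k8s.sigs.minikube.error` event captured above; it uses only the standard-library `json` module, not any minikube API:

```python
import json

# One CloudEvents line copied verbatim from the `minikube start --output=json` run above.
line = (
    '{"specversion":"1.0","id":"dbd199e9-243d-4bc5-8f52-101b9b02d7a1",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
    '"issues":"","message":"The driver \'fail\' is not supported on darwin/arm64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}'
)

event = json.loads(line)
assert event["type"] == "io.k8s.sigs.minikube.error"
exitcode = event["data"]["exitcode"]  # note: serialized as a string ("56"), not an int
name = event["data"]["name"]          # "DRV_UNSUPPORTED_OS"
```

The test asserts exit status 56 on the process; the same code is also embedded in the error event's `data.exitcode` field, which is what a log consumer would match on.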

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.10s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-337000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-337000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.716375ms)

-- stdout --
	* [NoKubernetes-337000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19283
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19283-6848/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-6848/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
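The MK_USAGE exit above is the expected rejection of a mutually exclusive flag pair. As a generic sketch of that kind of validation (a hypothetical helper for illustration, not minikube's actual source):

```python
from typing import Optional


def check_flags(no_kubernetes: bool, kubernetes_version: Optional[str]) -> Optional[str]:
    """Return a usage-error message when --no-kubernetes conflicts with
    --kubernetes-version, or None when the combination is valid."""
    if no_kubernetes and kubernetes_version is not None:
        return "cannot specify --kubernetes-version with --no-kubernetes"
    return None


# Mirrors the failing invocation above: --no-kubernetes --kubernetes-version=1.20
err = check_flags(no_kubernetes=True, kubernetes_version="1.20")

# Dropping the version flag clears the conflict.
ok = check_flags(no_kubernetes=True, kubernetes_version=None)
```

The test passes because it expects exactly this exit status 14 and usage message; the suggested remedy in the captured stderr is `minikube config unset kubernetes-version`.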

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-337000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-337000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.563791ms)

-- stdout --
	* The control-plane node NoKubernetes-337000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-337000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.3s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.596585333s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.701678042s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.30s)

TestNoKubernetes/serial/Stop (3.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-337000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-337000: (3.277362917s)
--- PASS: TestNoKubernetes/serial/Stop (3.28s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-337000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-337000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.660709ms)

-- stdout --
	* The control-plane node NoKubernetes-337000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-337000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-018000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (3.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-981000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-981000 --alsologtostderr -v=3: (3.574403459s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.57s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-981000 -n old-k8s-version-981000: exit status 7 (57.386417ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-981000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.25s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-112000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-112000 --alsologtostderr -v=3: (3.247456416s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.25s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-112000 -n no-preload-112000: exit status 7 (39.046625ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-112000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/embed-certs/serial/Stop (2.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-016000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-016000 --alsologtostderr -v=3: (2.024250333s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-016000 -n embed-certs-016000: exit status 7 (55.2685ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-016000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-636000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-636000 --alsologtostderr -v=3: (3.495820666s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-636000 -n default-k8s-diff-port-636000: exit status 7 (56.765792ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-636000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-412000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (1.99s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-412000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-412000 --alsologtostderr -v=3: (1.989397042s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.99s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-412000 -n newest-cni-412000: exit status 7 (57.948042ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-412000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (8.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1074272737/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721238627020932000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1074272737/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721238627020932000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1074272737/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721238627020932000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1074272737/001/test-1721238627020932000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (50.210917ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.111917ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.791916ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.790292ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.526542ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.053084ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.697209ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo umount -f /mount-9p": exit status 83 (46.804709ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-928000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1074272737/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (8.75s)

TestFunctional/parallel/MountCmd/specific-port (10.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1870680923/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.325958ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.557625ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.195792ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.165084ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.329666ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.703167ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.389791ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "sudo umount -f /mount-9p": exit status 83 (41.911125ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-928000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1870680923/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.99s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2623134466/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2623134466/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2623134466/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1: exit status 83 (90.4755ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1: exit status 83 (82.913042ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1: exit status 83 (84.447375ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1: exit status 83 (84.199375ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1: exit status 83 (84.461542ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1: exit status 83 (84.145625ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-928000 ssh "findmnt -T" /mount1: exit status 83 (85.3405ms)

-- stdout --
	* The control-plane node functional-928000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-928000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2623134466/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2623134466/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-928000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2623134466/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.34s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-031000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-031000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-031000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /etc/hosts:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /etc/resolv.conf:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-031000

>>> host: crictl pods:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: crictl containers:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> k8s: describe netcat deployment:
error: context "cilium-031000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-031000" does not exist

>>> k8s: netcat logs:
error: context "cilium-031000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-031000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-031000" does not exist

>>> k8s: coredns logs:
error: context "cilium-031000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-031000" does not exist

>>> k8s: api server logs:
error: context "cilium-031000" does not exist

>>> host: /etc/cni:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: ip a s:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: ip r s:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: iptables-save:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: iptables table nat:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-031000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-031000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-031000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-031000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-031000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-031000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-031000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-031000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-031000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-031000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-031000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: kubelet daemon config:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> k8s: kubelet logs:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-031000

>>> host: docker daemon status:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: docker daemon config:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: docker system info:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: cri-docker daemon status:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: cri-docker daemon config:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: cri-dockerd version:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: containerd daemon status:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: containerd daemon config:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: containerd config dump:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: crio daemon status:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: crio daemon config:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: /etc/crio:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

>>> host: crio config:
* Profile "cilium-031000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-031000"

----------------------- debugLogs end: cilium-031000 [took: 2.182514084s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-031000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-031000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-010000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-010000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
