Test Report: QEMU_macOS 19052

d48f9e84ed90f918e7d088c10bc117a5466d28f2:2024-06-10:34838

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.85
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.05
27 TestAddons/Setup 10.73
28 TestCertOptions 10.19
29 TestCertExpiration 197.6
30 TestDockerFlags 12.46
31 TestForceSystemdFlag 11.99
32 TestForceSystemdEnv 10.35
38 TestErrorSpam/setup 10.02
47 TestFunctional/serial/StartWithProxy 9.96
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.64
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.95
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.17
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 93.69
100 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
101 TestFunctional/parallel/ServiceCmd/List 0.04
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
104 TestFunctional/parallel/ServiceCmd/Format 0.05
105 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/Version/components 0.04
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
118 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.4
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.52
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
127 TestFunctional/parallel/DockerEnv/bash 0.05
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.8
141 TestMultiControlPlane/serial/StartCluster 10.12
142 TestMultiControlPlane/serial/DeployApp 91.34
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 36.84
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.24
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 2.22
156 TestMultiControlPlane/serial/RestartCluster 5.24
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 10
165 TestJSONOutput/start/Command 9.91
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.46
197 TestMountStart/serial/StartWithMountFirst 10.16
200 TestMultiNode/serial/FreshStart2Nodes 10.01
201 TestMultiNode/serial/DeployApp2Nodes 114.94
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 56.15
209 TestMultiNode/serial/RestartKeepsNodes 7.42
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.57
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 21.67
217 TestPreload 10.33
219 TestScheduledStopUnix 10.02
220 TestSkaffold 13.38
223 TestRunningBinaryUpgrade 636.99
225 TestKubernetesUpgrade 18.88
239 TestStoppedBinaryUpgrade/Upgrade 593.84
249 TestPause/serial/Start 9.81
252 TestNoKubernetes/serial/StartWithK8s 9.84
253 TestNoKubernetes/serial/StartWithStopK8s 7.68
254 TestNoKubernetes/serial/Start 7.65
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.46
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.68
260 TestNoKubernetes/serial/StartNoArgs 5.5
262 TestNetworkPlugins/group/auto/Start 9.83
263 TestNetworkPlugins/group/kindnet/Start 9.83
264 TestNetworkPlugins/group/flannel/Start 9.89
265 TestNetworkPlugins/group/enable-default-cni/Start 9.95
266 TestNetworkPlugins/group/bridge/Start 9.91
267 TestNetworkPlugins/group/kubenet/Start 9.87
268 TestNetworkPlugins/group/custom-flannel/Start 9.77
269 TestNetworkPlugins/group/calico/Start 10.02
270 TestNetworkPlugins/group/false/Start 9.78
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.98
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.9
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.23
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/no-preload/serial/Pause 0.1
294 TestStartStop/group/embed-certs/serial/FirstStart 9.89
295 TestStartStop/group/embed-certs/serial/DeployApp 0.09
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
299 TestStartStop/group/embed-certs/serial/SecondStart 5.21
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.04
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
305 TestStartStop/group/embed-certs/serial/Pause 0.1
307 TestStartStop/group/newest-cni/serial/FirstStart 11.71
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.14
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
317 TestStartStop/group/newest-cni/serial/SecondStart 5.25
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (17.85s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-586000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-586000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.849436541s)

-- stdout --
	{"specversion":"1.0","id":"e6c41a06-7f97-483e-85c4-73ac8a48c4ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-586000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7359455-edc9-4748-b536-8d76bcc4c83b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19052"}}
	{"specversion":"1.0","id":"5f3974da-9d6e-473b-94b7-e25c17502e74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig"}}
	{"specversion":"1.0","id":"c0c21eb1-91c2-42a8-a028-b5318c7dc535","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"25da64d3-0475-4e74-b05b-dad6378483e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb757f1b-d919-4bb3-8a56-4b8343e3fd25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube"}}
	{"specversion":"1.0","id":"fa4ad4d2-a3c8-448d-ba1e-9a5ca3e4b066","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6fdd7749-aa22-48ec-bf2a-9a9aeab89797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5624197e-dbdd-4a87-958b-d5ff3febcc84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f1dc8e6b-5541-4921-9606-94a7871fd259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd7a6042-fbfa-4623-a5d0-d12893c1faea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-586000\" primary control-plane node in \"download-only-586000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d67a87b-14e2-40f8-b949-38cd950bb88d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4722f7fa-1217-4b79-bdc1-cf9d5b4c2775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900] Decompressors:map[bz2:0x140007679b0 gz:0x140007679b8 tar:0x14000767960 tar.bz2:0x14000767970 tar.gz:0x14000767980 tar.xz:0x14000767990 tar.zst:0x140007679a0 tbz2:0x14000767970 tgz:0x1
4000767980 txz:0x14000767990 tzst:0x140007679a0 xz:0x140007679c0 zip:0x140007679d0 zst:0x140007679c8] Getters:map[file:0x14000062c60 http:0x14000884280 https:0x140008842d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"c592014c-34f5-40f1-a1c2-97482ca92788","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0610 04:15:56.387407   14787 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:15:56.387601   14787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:15:56.387604   14787 out.go:304] Setting ErrFile to fd 2...
	I0610 04:15:56.387606   14787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:15:56.387727   14787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	W0610 04:15:56.387825   14787 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19052-14289/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19052-14289/.minikube/config/config.json: no such file or directory
	I0610 04:15:56.389118   14787 out.go:298] Setting JSON to true
	I0610 04:15:56.407434   14787 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8127,"bootTime":1718010029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:15:56.407515   14787 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:15:56.413097   14787 out.go:97] [download-only-586000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:15:56.416076   14787 out.go:169] MINIKUBE_LOCATION=19052
	W0610 04:15:56.413192   14787 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 04:15:56.413246   14787 notify.go:220] Checking for updates...
	I0610 04:15:56.424042   14787 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:15:56.427061   14787 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:15:56.428635   14787 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:15:56.432104   14787 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	W0610 04:15:56.438056   14787 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 04:15:56.438325   14787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:15:56.442005   14787 out.go:97] Using the qemu2 driver based on user configuration
	I0610 04:15:56.442026   14787 start.go:297] selected driver: qemu2
	I0610 04:15:56.442042   14787 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:15:56.442133   14787 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:15:56.445034   14787 out.go:169] Automatically selected the socket_vmnet network
	I0610 04:15:56.450506   14787 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 04:15:56.450617   14787 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 04:15:56.450671   14787 cni.go:84] Creating CNI manager for ""
	I0610 04:15:56.450690   14787 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 04:15:56.450744   14787 start.go:340] cluster config:
	{Name:download-only-586000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:15:56.455479   14787 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:15:56.460041   14787 out.go:97] Downloading VM boot image ...
	I0610 04:15:56.460076   14787 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso
	I0610 04:16:04.036447   14787 out.go:97] Starting "download-only-586000" primary control-plane node in "download-only-586000" cluster
	I0610 04:16:04.036472   14787 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:16:04.127660   14787 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 04:16:04.127686   14787 cache.go:56] Caching tarball of preloaded images
	I0610 04:16:04.127900   14787 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:16:04.133090   14787 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0610 04:16:04.133101   14787 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:04.365315   14787 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 04:16:13.073750   14787 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:13.073919   14787 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:13.769012   14787 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 04:16:13.769226   14787 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/download-only-586000/config.json ...
	I0610 04:16:13.769247   14787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/download-only-586000/config.json: {Name:mke2151e3aeea21948ac232c5b18ed83ea85d69a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:16:13.769503   14787 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:16:13.770490   14787 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0610 04:16:14.156017   14787 out.go:169] 
	W0610 04:16:14.162128   14787 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900] Decompressors:map[bz2:0x140007679b0 gz:0x140007679b8 tar:0x14000767960 tar.bz2:0x14000767970 tar.gz:0x14000767980 tar.xz:0x14000767990 tar.zst:0x140007679a0 tbz2:0x14000767970 tgz:0x14000767980 txz:0x14000767990 tzst:0x140007679a0 xz:0x140007679c0 zip:0x140007679d0 zst:0x140007679c8] Getters:map[file:0x14000062c60 http:0x14000884280 https:0x140008842d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0610 04:16:14.162153   14787 out_reason.go:110] 
	W0610 04:16:14.170025   14787 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:16:14.173998   14787 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-586000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (17.85s)
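
The getter error above is the whole story: there is no published kubectl binary (or .sha256 checksum) for darwin/arm64 at v1.20.0, since upstream Kubernetes only began shipping darwin/arm64 client binaries in later releases, so the checksum download returns HTTP 404 and minikube exits with status 40. The following standalone Go sketch, a hypothetical probe that is not part of the minikube test suite, reproduces the failing check against the exact URL from the error message:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the error above; a HEAD request is
	// enough to observe the "bad response code: 404" that fails the download.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // on this runner: 404 Not Found
}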

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
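
This subtest fails as a direct consequence of the previous one: the assertion is nothing more than a stat of the cached binary path, and the binary was never downloaded. A minimal sketch of that existence check (hypothetical code, using the cache path quoted verbatim in the message above):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path copied verbatim from the failure message above.
	p := "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(p); err != nil {
		fmt.Println("missing:", err) // on this runner: no such file or directory
		return
	}
	fmt.Println("found:", p)
}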

TestOffline (10.05s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-306000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-306000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.854589541s)

-- stdout --
	* [offline-docker-306000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-306000" primary control-plane node in "offline-docker-306000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-306000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:27:31.964233   16348 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:27:31.964384   16348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:27:31.964387   16348 out.go:304] Setting ErrFile to fd 2...
	I0610 04:27:31.964390   16348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:27:31.964537   16348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:27:31.965783   16348 out.go:298] Setting JSON to false
	I0610 04:27:31.983562   16348 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8822,"bootTime":1718010029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:27:31.983646   16348 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:27:31.988003   16348 out.go:177] * [offline-docker-306000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:27:31.996140   16348 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:27:31.996146   16348 notify.go:220] Checking for updates...
	I0610 04:27:32.003014   16348 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:27:32.006073   16348 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:27:32.009030   16348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:27:32.012032   16348 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:27:32.015041   16348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:27:32.018478   16348 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:27:32.018540   16348 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:27:32.023020   16348 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:27:32.030020   16348 start.go:297] selected driver: qemu2
	I0610 04:27:32.030032   16348 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:27:32.030042   16348 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:27:32.032029   16348 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:27:32.035032   16348 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:27:32.038134   16348 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:27:32.038164   16348 cni.go:84] Creating CNI manager for ""
	I0610 04:27:32.038172   16348 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:27:32.038175   16348 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:27:32.038209   16348 start.go:340] cluster config:
	{Name:offline-docker-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:27:32.042660   16348 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:27:32.050033   16348 out.go:177] * Starting "offline-docker-306000" primary control-plane node in "offline-docker-306000" cluster
	I0610 04:27:32.052999   16348 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:27:32.053025   16348 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:27:32.053035   16348 cache.go:56] Caching tarball of preloaded images
	I0610 04:27:32.053111   16348 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:27:32.053117   16348 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:27:32.053181   16348 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/offline-docker-306000/config.json ...
	I0610 04:27:32.053191   16348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/offline-docker-306000/config.json: {Name:mk23a8e23f498ba95cc5a4b2b6e9e28df89ef9f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:27:32.053495   16348 start.go:360] acquireMachinesLock for offline-docker-306000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:27:32.053525   16348 start.go:364] duration metric: took 24.709µs to acquireMachinesLock for "offline-docker-306000"
	I0610 04:27:32.053536   16348 start.go:93] Provisioning new machine with config: &{Name:offline-docker-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:27:32.053563   16348 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:27:32.058028   16348 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 04:27:32.073650   16348 start.go:159] libmachine.API.Create for "offline-docker-306000" (driver="qemu2")
	I0610 04:27:32.073679   16348 client.go:168] LocalClient.Create starting
	I0610 04:27:32.073746   16348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:27:32.073777   16348 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:32.073787   16348 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:32.073838   16348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:27:32.073861   16348 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:32.073868   16348 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:32.074231   16348 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:27:32.221466   16348 main.go:141] libmachine: Creating SSH key...
	I0610 04:27:32.365042   16348 main.go:141] libmachine: Creating Disk image...
	I0610 04:27:32.365052   16348 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:27:32.365299   16348 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2
	I0610 04:27:32.383318   16348 main.go:141] libmachine: STDOUT: 
	I0610 04:27:32.383342   16348 main.go:141] libmachine: STDERR: 
	I0610 04:27:32.383429   16348 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2 +20000M
	I0610 04:27:32.397450   16348 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:27:32.397486   16348 main.go:141] libmachine: STDERR: 
	I0610 04:27:32.397500   16348 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2
	I0610 04:27:32.397510   16348 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:27:32.397546   16348 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:9d:e1:f3:06:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2
	I0610 04:27:32.399458   16348 main.go:141] libmachine: STDOUT: 
	I0610 04:27:32.399473   16348 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:27:32.399493   16348 client.go:171] duration metric: took 325.805125ms to LocalClient.Create
	I0610 04:27:34.401573   16348 start.go:128] duration metric: took 2.34798725s to createHost
	I0610 04:27:34.401594   16348 start.go:83] releasing machines lock for "offline-docker-306000", held for 2.348047666s
	W0610 04:27:34.401608   16348 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:34.407076   16348 out.go:177] * Deleting "offline-docker-306000" in qemu2 ...
	W0610 04:27:34.417304   16348 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:34.417316   16348 start.go:728] Will try again in 5 seconds ...
	I0610 04:27:39.419588   16348 start.go:360] acquireMachinesLock for offline-docker-306000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:27:39.420067   16348 start.go:364] duration metric: took 351.291µs to acquireMachinesLock for "offline-docker-306000"
	I0610 04:27:39.420208   16348 start.go:93] Provisioning new machine with config: &{Name:offline-docker-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:27:39.420551   16348 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:27:39.426325   16348 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 04:27:39.475481   16348 start.go:159] libmachine.API.Create for "offline-docker-306000" (driver="qemu2")
	I0610 04:27:39.475541   16348 client.go:168] LocalClient.Create starting
	I0610 04:27:39.475648   16348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:27:39.475711   16348 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:39.475735   16348 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:39.475818   16348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:27:39.475861   16348 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:39.475880   16348 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:39.477108   16348 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:27:39.664509   16348 main.go:141] libmachine: Creating SSH key...
	I0610 04:27:39.709682   16348 main.go:141] libmachine: Creating Disk image...
	I0610 04:27:39.709687   16348 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:27:39.709875   16348 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2
	I0610 04:27:39.722217   16348 main.go:141] libmachine: STDOUT: 
	I0610 04:27:39.722239   16348 main.go:141] libmachine: STDERR: 
	I0610 04:27:39.722292   16348 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2 +20000M
	I0610 04:27:39.733265   16348 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:27:39.733289   16348 main.go:141] libmachine: STDERR: 
	I0610 04:27:39.733300   16348 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2
	I0610 04:27:39.733304   16348 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:27:39.733340   16348 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:18:55:20:98:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/offline-docker-306000/disk.qcow2
	I0610 04:27:39.734960   16348 main.go:141] libmachine: STDOUT: 
	I0610 04:27:39.734984   16348 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:27:39.734999   16348 client.go:171] duration metric: took 259.451167ms to LocalClient.Create
	I0610 04:27:41.737197   16348 start.go:128] duration metric: took 2.316601167s to createHost
	I0610 04:27:41.737292   16348 start.go:83] releasing machines lock for "offline-docker-306000", held for 2.317159333s
	W0610 04:27:41.737632   16348 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-306000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:41.753375   16348 out.go:177] 
	W0610 04:27:41.758332   16348 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:27:41.758356   16348 out.go:239] * 
	* 
	W0610 04:27:41.760927   16348 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:27:41.774272   16348 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-306000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-06-10 04:27:41.790394 -0700 PDT m=+705.485558918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-306000 -n offline-docker-306000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-306000 -n offline-docker-306000: exit status 7 (66.518958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-306000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-306000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-306000
--- FAIL: TestOffline (10.05s)
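
Note that this failure mode recurs throughout the report: every qemu2 start dies because socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A hypothetical pre-flight check (not minikube code) that reproduces the problem by dialing the same unix socket, with the path taken from the SocketVMnetPath field in the config dump above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from SocketVMnetPath in the cluster config above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this runner this would print a "connection refused" error,
		// matching the STDERR from socket_vmnet_client above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused, restarting the socket_vmnet daemon on the CI host is the obvious first remedy; none of the subsequent VM-creation retries can succeed until the socket is reachable.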

TestAddons/Setup (10.73s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-057000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-057000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.726001625s)

-- stdout --
	* [addons-057000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-057000" primary control-plane node in "addons-057000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-057000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:16:26.437113   14897 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:16:26.437237   14897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:16:26.437241   14897 out.go:304] Setting ErrFile to fd 2...
	I0610 04:16:26.437244   14897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:16:26.437407   14897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:16:26.438681   14897 out.go:298] Setting JSON to false
	I0610 04:16:26.455492   14897 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8157,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:16:26.455562   14897 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:16:26.460881   14897 out.go:177] * [addons-057000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:16:26.462386   14897 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:16:26.462423   14897 notify.go:220] Checking for updates...
	I0610 04:16:26.464814   14897 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:16:26.470892   14897 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:16:26.472453   14897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:16:26.475773   14897 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:16:26.478808   14897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:16:26.482003   14897 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:16:26.485819   14897 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:16:26.492795   14897 start.go:297] selected driver: qemu2
	I0610 04:16:26.492800   14897 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:16:26.492804   14897 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:16:26.494977   14897 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:16:26.498764   14897 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:16:26.501904   14897 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:16:26.501941   14897 cni.go:84] Creating CNI manager for ""
	I0610 04:16:26.501956   14897 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:16:26.501959   14897 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:16:26.501992   14897 start.go:340] cluster config:
	{Name:addons-057000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:16:26.506379   14897 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:16:26.513788   14897 out.go:177] * Starting "addons-057000" primary control-plane node in "addons-057000" cluster
	I0610 04:16:26.517788   14897 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:16:26.517799   14897 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:16:26.517807   14897 cache.go:56] Caching tarball of preloaded images
	I0610 04:16:26.517862   14897 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:16:26.517874   14897 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:16:26.518084   14897 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/addons-057000/config.json ...
	I0610 04:16:26.518094   14897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/addons-057000/config.json: {Name:mk4c32b86ab67b43c1d173c01a4a7437d1685151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:16:26.518689   14897 start.go:360] acquireMachinesLock for addons-057000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:16:26.518757   14897 start.go:364] duration metric: took 61.5µs to acquireMachinesLock for "addons-057000"
	I0610 04:16:26.518768   14897 start.go:93] Provisioning new machine with config: &{Name:addons-057000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:16:26.518794   14897 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:16:26.523793   14897 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 04:16:26.543095   14897 start.go:159] libmachine.API.Create for "addons-057000" (driver="qemu2")
	I0610 04:16:26.543137   14897 client.go:168] LocalClient.Create starting
	I0610 04:16:26.543276   14897 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:16:26.611118   14897 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:16:26.720910   14897 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:16:27.444260   14897 main.go:141] libmachine: Creating SSH key...
	I0610 04:16:27.557430   14897 main.go:141] libmachine: Creating Disk image...
	I0610 04:16:27.557435   14897 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:16:27.557602   14897 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2
	I0610 04:16:27.570989   14897 main.go:141] libmachine: STDOUT: 
	I0610 04:16:27.571008   14897 main.go:141] libmachine: STDERR: 
	I0610 04:16:27.571063   14897 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2 +20000M
	I0610 04:16:27.582010   14897 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:16:27.582024   14897 main.go:141] libmachine: STDERR: 
	I0610 04:16:27.582040   14897 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2
	I0610 04:16:27.582045   14897 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:16:27.582086   14897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:b1:07:b3:48:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2
	I0610 04:16:27.583763   14897 main.go:141] libmachine: STDOUT: 
	I0610 04:16:27.583778   14897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:16:27.583799   14897 client.go:171] duration metric: took 1.040652333s to LocalClient.Create
	I0610 04:16:29.586060   14897 start.go:128] duration metric: took 3.06717675s to createHost
	I0610 04:16:29.586129   14897 start.go:83] releasing machines lock for "addons-057000", held for 3.0673535s
	W0610 04:16:29.586184   14897 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:16:29.597090   14897 out.go:177] * Deleting "addons-057000" in qemu2 ...
	W0610 04:16:29.632558   14897 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:16:29.632595   14897 start.go:728] Will try again in 5 seconds ...
	I0610 04:16:34.634802   14897 start.go:360] acquireMachinesLock for addons-057000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:16:34.635328   14897 start.go:364] duration metric: took 423.166µs to acquireMachinesLock for "addons-057000"
	I0610 04:16:34.635457   14897 start.go:93] Provisioning new machine with config: &{Name:addons-057000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:16:34.635770   14897 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:16:34.647210   14897 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 04:16:34.700226   14897 start.go:159] libmachine.API.Create for "addons-057000" (driver="qemu2")
	I0610 04:16:34.700288   14897 client.go:168] LocalClient.Create starting
	I0610 04:16:34.700431   14897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:16:34.700485   14897 main.go:141] libmachine: Decoding PEM data...
	I0610 04:16:34.700501   14897 main.go:141] libmachine: Parsing certificate...
	I0610 04:16:34.700620   14897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:16:34.700666   14897 main.go:141] libmachine: Decoding PEM data...
	I0610 04:16:34.700682   14897 main.go:141] libmachine: Parsing certificate...
	I0610 04:16:34.701339   14897 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:16:34.961178   14897 main.go:141] libmachine: Creating SSH key...
	I0610 04:16:35.063329   14897 main.go:141] libmachine: Creating Disk image...
	I0610 04:16:35.063334   14897 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:16:35.063521   14897 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2
	I0610 04:16:35.075856   14897 main.go:141] libmachine: STDOUT: 
	I0610 04:16:35.075875   14897 main.go:141] libmachine: STDERR: 
	I0610 04:16:35.075935   14897 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2 +20000M
	I0610 04:16:35.087009   14897 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:16:35.087032   14897 main.go:141] libmachine: STDERR: 
	I0610 04:16:35.087052   14897 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2
	I0610 04:16:35.087056   14897 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:16:35.087096   14897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:df:3f:2b:b8:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/addons-057000/disk.qcow2
	I0610 04:16:35.088940   14897 main.go:141] libmachine: STDOUT: 
	I0610 04:16:35.088955   14897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:16:35.088970   14897 client.go:171] duration metric: took 388.673667ms to LocalClient.Create
	I0610 04:16:37.090872   14897 start.go:128] duration metric: took 2.455057459s to createHost
	I0610 04:16:37.090948   14897 start.go:83] releasing machines lock for "addons-057000", held for 2.45558625s
	W0610 04:16:37.091287   14897 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-057000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-057000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:16:37.100937   14897 out.go:177] 
	W0610 04:16:37.109076   14897 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:16:37.109109   14897 out.go:239] * 
	* 
	W0610 04:16:37.111809   14897 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:16:37.120883   14897 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-057000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.73s)
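
The trace above shows minikube's create-retry behaviour on this failure: libmachine builds the qcow2 disk with qemu-img, tries to launch QEMU through socket_vmnet_client, fails, deletes the half-created profile, waits 5 seconds (start.go:728), retries once, and then exits with GUEST_PROVISION. Below is a minimal sketch of that control flow; createHost is a hypothetical stand-in for the driver's host-creation step, not minikube's real function.

// retry_sketch.go: illustrates the one-retry-after-5s control flow the
// log shows. Hypothetical stand-ins only; this is not minikube code.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost() error {
	// On this CI host every attempt failed identically.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	var err error
	for attempt := 1; attempt <= 2; attempt++ {
		if err = createHost(); err == nil {
			fmt.Println("host created")
			return
		}
		fmt.Printf("! StartHost failed, attempt %d: %v\n", attempt, err)
		if attempt == 1 {
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		}
	}
	fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
}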

TestCertOptions (10.19s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-160000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-160000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.903144541s)

-- stdout --
	* [cert-options-160000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-160000" primary control-plane node in "cert-options-160000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-160000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-160000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-160000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-160000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-160000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.813792ms)

-- stdout --
	* The control-plane node cert-options-160000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-160000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-160000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-160000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-160000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-160000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.456125ms)

-- stdout --
	* The control-plane node cert-options-160000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-160000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-160000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-160000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-160000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-06-10 04:39:29.647614 -0700 PDT m=+1413.337858751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-160000 -n cert-options-160000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-160000 -n cert-options-160000: exit status 7 (30.5195ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-160000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-160000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-160000
--- FAIL: TestCertOptions (10.19s)
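
For reference, cert_options_test.go asserts that the requested --apiserver-ips and --apiserver-names end up in the apiserver certificate's subject alternative names; because the VM never booted, the ssh step returns exit status 83 and all four SAN checks fail without ever seeing a certificate. Below is a minimal sketch of the kind of SAN inspection involved: the real test reads /var/lib/minikube/certs/apiserver.crt over ssh, so the local file path here is a hypothetical substitute.

// san_sketch.go: shows the x509 fields the SAN assertions look at.
// Sketch only; "apiserver.crt" is a hypothetical local copy of the
// certificate the real test fetches from inside the VM.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The test expects 127.0.0.1 and 192.168.15.15 among the IP SANs,
	// and localhost and www.google.com among the DNS SANs.
	for _, ip := range cert.IPAddresses {
		fmt.Println("IP SAN: ", ip)
	}
	for _, name := range cert.DNSNames {
		fmt.Println("DNS SAN:", name)
	}
}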

TestCertExpiration (197.6s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-472000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-472000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.201107292s)

-- stdout --
	* [cert-expiration-472000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-472000" primary control-plane node in "cert-expiration-472000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-472000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-472000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-472000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.229923875s)

-- stdout --
	* [cert-expiration-472000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-472000" primary control-plane node in "cert-expiration-472000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-472000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-472000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-472000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-472000" primary control-plane node in "cert-expiration-472000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-472000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-06-10 04:42:14.695828 -0700 PDT m=+1578.417630168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-472000 -n cert-expiration-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-472000 -n cert-expiration-472000: exit status 7 (66.925708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-472000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-472000
--- FAIL: TestCertExpiration (197.60s)
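
TestCertExpiration starts a cluster whose certificates expire after 3 minutes, waits out the expiry (which is why the test still consumed ~197s even though both starts failed within seconds), then restarts with --cert-expiration=8760h and expects minikube to warn about the expired certs. A minimal sketch of the expiry condition being tested for is below; the file path is hypothetical, since the real certificates live inside the VM.

// expiry_sketch.go: the expiry condition the test expects minikube to
// warn about on restart. Sketch only; reads a hypothetical local PEM
// file and compares its NotAfter field with the current time.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired at %s\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}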

TestDockerFlags (12.46s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-585000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-585000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.202573709s)

-- stdout --
	* [docker-flags-585000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-585000" primary control-plane node in "docker-flags-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:39:07.159233   17076 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:39:07.159424   17076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:39:07.159427   17076 out.go:304] Setting ErrFile to fd 2...
	I0610 04:39:07.159430   17076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:39:07.159579   17076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:39:07.160896   17076 out.go:298] Setting JSON to false
	I0610 04:39:07.180414   17076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9518,"bootTime":1718010029,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:39:07.180509   17076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:39:07.232203   17076 out.go:177] * [docker-flags-585000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:39:07.250232   17076 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:39:07.243395   17076 notify.go:220] Checking for updates...
	I0610 04:39:07.262122   17076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:39:07.269260   17076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:39:07.278276   17076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:39:07.281210   17076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:39:07.285212   17076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:39:07.288625   17076 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:39:07.288700   17076 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:39:07.288760   17076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:39:07.293140   17076 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:39:07.300229   17076 start.go:297] selected driver: qemu2
	I0610 04:39:07.300233   17076 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:39:07.300237   17076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:39:07.302320   17076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:39:07.305189   17076 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:39:07.308305   17076 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0610 04:39:07.308332   17076 cni.go:84] Creating CNI manager for ""
	I0610 04:39:07.308337   17076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:39:07.308340   17076 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:39:07.308370   17076 start.go:340] cluster config:
	{Name:docker-flags-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:39:07.312407   17076 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:39:07.319220   17076 out.go:177] * Starting "docker-flags-585000" primary control-plane node in "docker-flags-585000" cluster
	I0610 04:39:07.323259   17076 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:39:07.323271   17076 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:39:07.323276   17076 cache.go:56] Caching tarball of preloaded images
	I0610 04:39:07.323329   17076 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:39:07.323335   17076 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:39:07.323391   17076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/docker-flags-585000/config.json ...
	I0610 04:39:07.323401   17076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/docker-flags-585000/config.json: {Name:mkf6644156062d4d5f710384d4a23d4a69d72df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:39:07.323669   17076 start.go:360] acquireMachinesLock for docker-flags-585000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:39:09.453295   17076 start.go:364] duration metric: took 2.129587292s to acquireMachinesLock for "docker-flags-585000"
	I0610 04:39:09.453496   17076 start.go:93] Provisioning new machine with config: &{Name:docker-flags-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:39:09.453793   17076 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:39:09.463595   17076 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 04:39:09.507648   17076 start.go:159] libmachine.API.Create for "docker-flags-585000" (driver="qemu2")
	I0610 04:39:09.507698   17076 client.go:168] LocalClient.Create starting
	I0610 04:39:09.507856   17076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:39:09.507924   17076 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:09.507947   17076 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:09.508018   17076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:39:09.508064   17076 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:09.508078   17076 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:09.508777   17076 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:39:09.662250   17076 main.go:141] libmachine: Creating SSH key...
	I0610 04:39:09.749791   17076 main.go:141] libmachine: Creating Disk image...
	I0610 04:39:09.749796   17076 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:39:09.749977   17076 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2
	I0610 04:39:09.762396   17076 main.go:141] libmachine: STDOUT: 
	I0610 04:39:09.762415   17076 main.go:141] libmachine: STDERR: 
	I0610 04:39:09.762477   17076 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2 +20000M
	I0610 04:39:09.773355   17076 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:39:09.773378   17076 main.go:141] libmachine: STDERR: 
	I0610 04:39:09.773403   17076 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2
	I0610 04:39:09.773408   17076 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:39:09.773440   17076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:03:40:01:ce:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2
	I0610 04:39:09.775132   17076 main.go:141] libmachine: STDOUT: 
	I0610 04:39:09.775154   17076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:39:09.775170   17076 client.go:171] duration metric: took 267.463959ms to LocalClient.Create
	I0610 04:39:11.777377   17076 start.go:128] duration metric: took 2.323517333s to createHost
	I0610 04:39:11.777439   17076 start.go:83] releasing machines lock for "docker-flags-585000", held for 2.324057084s
	W0610 04:39:11.777514   17076 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:11.792845   17076 out.go:177] * Deleting "docker-flags-585000" in qemu2 ...
	W0610 04:39:11.821947   17076 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:11.821982   17076 start.go:728] Will try again in 5 seconds ...
	I0610 04:39:16.824262   17076 start.go:360] acquireMachinesLock for docker-flags-585000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:39:16.824833   17076 start.go:364] duration metric: took 461.583µs to acquireMachinesLock for "docker-flags-585000"
	I0610 04:39:16.824981   17076 start.go:93] Provisioning new machine with config: &{Name:docker-flags-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:39:16.825208   17076 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:39:16.839955   17076 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 04:39:16.891296   17076 start.go:159] libmachine.API.Create for "docker-flags-585000" (driver="qemu2")
	I0610 04:39:16.891357   17076 client.go:168] LocalClient.Create starting
	I0610 04:39:16.891472   17076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:39:16.891541   17076 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:16.891560   17076 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:16.891623   17076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:39:16.891667   17076 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:16.891678   17076 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:16.892238   17076 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:39:17.051209   17076 main.go:141] libmachine: Creating SSH key...
	I0610 04:39:17.253769   17076 main.go:141] libmachine: Creating Disk image...
	I0610 04:39:17.253776   17076 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:39:17.254164   17076 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2
	I0610 04:39:17.267297   17076 main.go:141] libmachine: STDOUT: 
	I0610 04:39:17.267316   17076 main.go:141] libmachine: STDERR: 
	I0610 04:39:17.267365   17076 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2 +20000M
	I0610 04:39:17.278423   17076 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:39:17.278442   17076 main.go:141] libmachine: STDERR: 
	I0610 04:39:17.278456   17076 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2
	I0610 04:39:17.278459   17076 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:39:17.278494   17076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f9:0e:70:c2:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/docker-flags-585000/disk.qcow2
	I0610 04:39:17.280190   17076 main.go:141] libmachine: STDOUT: 
	I0610 04:39:17.280204   17076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:39:17.280217   17076 client.go:171] duration metric: took 388.852ms to LocalClient.Create
	I0610 04:39:19.282447   17076 start.go:128] duration metric: took 2.4571795s to createHost
	I0610 04:39:19.282527   17076 start.go:83] releasing machines lock for "docker-flags-585000", held for 2.4576545s
	W0610 04:39:19.282914   17076 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:19.295560   17076 out.go:177] 
	W0610 04:39:19.299616   17076 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:39:19.299761   17076 out.go:239] * 
	W0610 04:39:19.302585   17076 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:39:19.314574   17076 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-585000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-585000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-585000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.633709ms)

-- stdout --
	* The control-plane node docker-flags-585000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-585000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-585000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-585000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-585000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-585000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-585000\"\n"*.
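Note: both docker_test.go:63 expectations fail here only as a knock-on effect of the start failure. With the host stopped, the ssh call returns minikube's advisory message instead of systemd's Environment= property, so the FOO=BAR and BAZ=BAT substring checks can never match. On a VM that actually booted, `systemctl show docker --property=Environment --no-pager` prints a single Environment= line and the assertion reduces to a substring test, roughly as in this Go sketch (the sample Environment line below is an assumption about healthy output, not taken from this run, which never booted):

    // env_flag_check.go - minimal sketch of verifying that --docker-env values
    // reached the Docker daemon. Illustrative only, not the docker_test.go source.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Assumed output of `systemctl show docker --property=Environment`
    	// after starting with --docker-env=FOO=BAR --docker-env=BAZ=BAT.
    	output := "Environment=FOO=BAR BAZ=BAT"
    	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
    		if !strings.Contains(output, want) {
    			fmt.Printf("env %q was not passed to the Docker daemon\n", want)
    			return
    		}
    	}
    	fmt.Println("all --docker-env values reached the daemon")
    }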
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-585000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-585000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.734917ms)

-- stdout --
	* The control-plane node docker-flags-585000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-585000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-585000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-585000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-585000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-585000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-06-10 04:39:19.455178 -0700 PDT m=+1403.145493418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-585000 -n docker-flags-585000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-585000 -n docker-flags-585000: exit status 7 (29.82375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-585000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-585000
--- FAIL: TestDockerFlags (12.46s)
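Both create attempts above die at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and the profile stays Stopped; the later exit status 83 failures are downstream symptoms. A minimal sketch, assuming only the socket path from the log (the 2-second timeout is an arbitrary choice), of how one might confirm the daemon is down on the CI host before blaming the qemu2 driver:

    // probe_socket_vmnet.go - check whether anything is accepting connections
    // on the unix socket the qemu2 driver dials via socket_vmnet_client.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// "connection refused" matches the driver's failure mode: the socket
    		// file may exist, but no daemon is listening behind it.
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the daemon is the likely fix (for a Homebrew install, typically `sudo brew services restart socket_vmnet`, though the exact invocation depends on how socket_vmnet was installed on the agent).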

TestForceSystemdFlag (11.99s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-540000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-540000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.774662292s)

-- stdout --
	* [force-systemd-flag-540000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-540000" primary control-plane node in "force-systemd-flag-540000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-540000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:38:30.377216   16909 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:38:30.377356   16909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:38:30.377360   16909 out.go:304] Setting ErrFile to fd 2...
	I0610 04:38:30.377362   16909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:38:30.377502   16909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:38:30.378586   16909 out.go:298] Setting JSON to false
	I0610 04:38:30.394716   16909 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9481,"bootTime":1718010029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:38:30.394777   16909 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:38:30.399383   16909 out.go:177] * [force-systemd-flag-540000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:38:30.405321   16909 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:38:30.405383   16909 notify.go:220] Checking for updates...
	I0610 04:38:30.409374   16909 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:38:30.413361   16909 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:38:30.416429   16909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:38:30.419392   16909 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:38:30.422371   16909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:38:30.425706   16909 config.go:182] Loaded profile config "NoKubernetes-448000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:38:30.425781   16909 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:38:30.425827   16909 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:38:30.430340   16909 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:38:30.436344   16909 start.go:297] selected driver: qemu2
	I0610 04:38:30.436350   16909 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:38:30.436357   16909 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:38:30.438656   16909 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:38:30.441312   16909 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:38:30.445410   16909 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 04:38:30.445449   16909 cni.go:84] Creating CNI manager for ""
	I0610 04:38:30.445468   16909 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:38:30.445472   16909 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:38:30.445505   16909 start.go:340] cluster config:
	{Name:force-systemd-flag-540000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:38:30.450002   16909 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:38:30.458407   16909 out.go:177] * Starting "force-systemd-flag-540000" primary control-plane node in "force-systemd-flag-540000" cluster
	I0610 04:38:30.462214   16909 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:38:30.462229   16909 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:38:30.462237   16909 cache.go:56] Caching tarball of preloaded images
	I0610 04:38:30.462308   16909 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:38:30.462313   16909 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:38:30.462371   16909 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/force-systemd-flag-540000/config.json ...
	I0610 04:38:30.462383   16909 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/force-systemd-flag-540000/config.json: {Name:mk9be944e5d09d4495d67991fe94b67e55ac2718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:38:30.462723   16909 start.go:360] acquireMachinesLock for force-systemd-flag-540000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:38:31.868329   16909 start.go:364] duration metric: took 1.405572792s to acquireMachinesLock for "force-systemd-flag-540000"
	I0610 04:38:31.868502   16909 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:38:31.868692   16909 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:38:31.879087   16909 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 04:38:31.929732   16909 start.go:159] libmachine.API.Create for "force-systemd-flag-540000" (driver="qemu2")
	I0610 04:38:31.929785   16909 client.go:168] LocalClient.Create starting
	I0610 04:38:31.929897   16909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:38:31.929959   16909 main.go:141] libmachine: Decoding PEM data...
	I0610 04:38:31.929982   16909 main.go:141] libmachine: Parsing certificate...
	I0610 04:38:31.930058   16909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:38:31.930102   16909 main.go:141] libmachine: Decoding PEM data...
	I0610 04:38:31.930121   16909 main.go:141] libmachine: Parsing certificate...
	I0610 04:38:31.930814   16909 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:38:32.281449   16909 main.go:141] libmachine: Creating SSH key...
	I0610 04:38:32.459953   16909 main.go:141] libmachine: Creating Disk image...
	I0610 04:38:32.459960   16909 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:38:32.460150   16909 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2
	I0610 04:38:32.474128   16909 main.go:141] libmachine: STDOUT: 
	I0610 04:38:32.474150   16909 main.go:141] libmachine: STDERR: 
	I0610 04:38:32.474216   16909 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2 +20000M
	I0610 04:38:32.485734   16909 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:38:32.485761   16909 main.go:141] libmachine: STDERR: 
	I0610 04:38:32.485780   16909 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2
	I0610 04:38:32.485785   16909 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:38:32.485824   16909 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:7e:19:e3:dd:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2
	I0610 04:38:32.487568   16909 main.go:141] libmachine: STDOUT: 
	I0610 04:38:32.487581   16909 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:38:32.487600   16909 client.go:171] duration metric: took 557.803834ms to LocalClient.Create
	I0610 04:38:34.489806   16909 start.go:128] duration metric: took 2.621062167s to createHost
	I0610 04:38:34.489866   16909 start.go:83] releasing machines lock for "force-systemd-flag-540000", held for 2.621454041s
	W0610 04:38:34.489931   16909 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:38:34.512447   16909 out.go:177] * Deleting "force-systemd-flag-540000" in qemu2 ...
	W0610 04:38:34.542535   16909 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:38:34.542569   16909 start.go:728] Will try again in 5 seconds ...
	I0610 04:38:39.544865   16909 start.go:360] acquireMachinesLock for force-systemd-flag-540000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:38:39.555499   16909 start.go:364] duration metric: took 10.552166ms to acquireMachinesLock for "force-systemd-flag-540000"
	I0610 04:38:39.555561   16909 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-540000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-540000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:38:39.555779   16909 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:38:39.565702   16909 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 04:38:39.613727   16909 start.go:159] libmachine.API.Create for "force-systemd-flag-540000" (driver="qemu2")
	I0610 04:38:39.613781   16909 client.go:168] LocalClient.Create starting
	I0610 04:38:39.613894   16909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:38:39.613967   16909 main.go:141] libmachine: Decoding PEM data...
	I0610 04:38:39.613983   16909 main.go:141] libmachine: Parsing certificate...
	I0610 04:38:39.614055   16909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:38:39.614099   16909 main.go:141] libmachine: Decoding PEM data...
	I0610 04:38:39.614113   16909 main.go:141] libmachine: Parsing certificate...
	I0610 04:38:39.614623   16909 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:38:39.938245   16909 main.go:141] libmachine: Creating SSH key...
	I0610 04:38:40.048256   16909 main.go:141] libmachine: Creating Disk image...
	I0610 04:38:40.048262   16909 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:38:40.048437   16909 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2
	I0610 04:38:40.060795   16909 main.go:141] libmachine: STDOUT: 
	I0610 04:38:40.060818   16909 main.go:141] libmachine: STDERR: 
	I0610 04:38:40.060870   16909 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2 +20000M
	I0610 04:38:40.072010   16909 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:38:40.072024   16909 main.go:141] libmachine: STDERR: 
	I0610 04:38:40.072039   16909 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2
	I0610 04:38:40.072044   16909 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:38:40.072076   16909 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:d7:dc:07:47:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-flag-540000/disk.qcow2
	I0610 04:38:40.073819   16909 main.go:141] libmachine: STDOUT: 
	I0610 04:38:40.073834   16909 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:38:40.073854   16909 client.go:171] duration metric: took 460.064584ms to LocalClient.Create
	I0610 04:38:42.074971   16909 start.go:128] duration metric: took 2.519143375s to createHost
	I0610 04:38:42.075016   16909 start.go:83] releasing machines lock for "force-systemd-flag-540000", held for 2.519476125s
	W0610 04:38:42.075346   16909 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:38:42.082725   16909 out.go:177] 
	W0610 04:38:42.095815   16909 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:38:42.095883   16909 out.go:239] * 
	W0610 04:38:42.098644   16909 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:38:42.106801   16909 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-540000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-540000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-540000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.243459ms)

-- stdout --
	* The control-plane node force-systemd-flag-540000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-540000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-540000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-06-10 04:38:42.199508 -0700 PDT m=+1365.890081876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-540000 -n force-systemd-flag-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-540000 -n force-systemd-flag-540000: exit status 7 (34.490125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-540000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-540000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-540000
--- FAIL: TestForceSystemdFlag (11.99s)
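As with TestDockerFlags, the cgroup-driver assertion never runs against a live daemon: exit status 83 is minikube refusing to ssh into a Stopped host. For context, the check this test is driving at, once a VM exists, is that Docker inside the guest reports systemd as its cgroup driver. A rough, hedged sketch of that flow (the binary path and profile name are taken from the log above; everything else is illustrative, not the actual docker_test.go source):

    // cgroup_driver_check.go - sketch of the assertion behind --force-systemd:
    // ask Docker in the guest for its cgroup driver and expect "systemd".
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirrors the command the test runs via `minikube ssh`.
    	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-540000",
    		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
    	if err != nil {
    		// With the VM never created, this is where the run dies (exit status 83).
    		fmt.Fprintf(os.Stderr, "ssh failed: %v\n%s", err, out)
    		os.Exit(1)
    	}
    	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
    		fmt.Fprintf(os.Stderr, "expected cgroup driver \"systemd\", got %q\n", driver)
    		os.Exit(1)
    	}
    	fmt.Println("cgroup driver is systemd")
    }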

TestForceSystemdEnv (10.35s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-423000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-423000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.0240925s)

-- stdout --
	* [force-systemd-env-423000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-423000" primary control-plane node in "force-systemd-env-423000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-423000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:38:56.802491   17031 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:38:56.802632   17031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:38:56.802637   17031 out.go:304] Setting ErrFile to fd 2...
	I0610 04:38:56.802640   17031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:38:56.802774   17031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:38:56.803888   17031 out.go:298] Setting JSON to false
	I0610 04:38:56.820281   17031 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9507,"bootTime":1718010029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:38:56.820346   17031 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:38:56.826501   17031 out.go:177] * [force-systemd-env-423000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:38:56.839349   17031 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:38:56.834496   17031 notify.go:220] Checking for updates...
	I0610 04:38:56.847425   17031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:38:56.855388   17031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:38:56.864271   17031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:38:56.877436   17031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:38:56.885281   17031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0610 04:38:56.893076   17031 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:38:56.893132   17031 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:38:56.897474   17031 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:38:56.904402   17031 start.go:297] selected driver: qemu2
	I0610 04:38:56.904408   17031 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:38:56.904413   17031 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:38:56.906756   17031 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:38:56.910459   17031 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:38:56.914504   17031 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 04:38:56.914518   17031 cni.go:84] Creating CNI manager for ""
	I0610 04:38:56.914526   17031 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:38:56.914534   17031 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:38:56.914560   17031 start.go:340] cluster config:
	{Name:force-systemd-env-423000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:38:56.919493   17031 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:38:56.926400   17031 out.go:177] * Starting "force-systemd-env-423000" primary control-plane node in "force-systemd-env-423000" cluster
	I0610 04:38:56.930448   17031 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:38:56.930467   17031 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:38:56.930477   17031 cache.go:56] Caching tarball of preloaded images
	I0610 04:38:56.930546   17031 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:38:56.930551   17031 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:38:56.930635   17031 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/force-systemd-env-423000/config.json ...
	I0610 04:38:56.930646   17031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/force-systemd-env-423000/config.json: {Name:mke8a47db103d9249ee97f2effa7b7c00083dfee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:38:56.930874   17031 start.go:360] acquireMachinesLock for force-systemd-env-423000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:38:56.930913   17031 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "force-systemd-env-423000"
	I0610 04:38:56.930924   17031 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:38:56.930951   17031 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:38:56.940391   17031 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 04:38:56.958218   17031 start.go:159] libmachine.API.Create for "force-systemd-env-423000" (driver="qemu2")
	I0610 04:38:56.958248   17031 client.go:168] LocalClient.Create starting
	I0610 04:38:56.958305   17031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:38:56.958337   17031 main.go:141] libmachine: Decoding PEM data...
	I0610 04:38:56.958349   17031 main.go:141] libmachine: Parsing certificate...
	I0610 04:38:56.958384   17031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:38:56.958406   17031 main.go:141] libmachine: Decoding PEM data...
	I0610 04:38:56.958413   17031 main.go:141] libmachine: Parsing certificate...
	I0610 04:38:56.958786   17031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:38:57.108369   17031 main.go:141] libmachine: Creating SSH key...
	I0610 04:38:57.306556   17031 main.go:141] libmachine: Creating Disk image...
	I0610 04:38:57.306567   17031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:38:57.306773   17031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2
	I0610 04:38:57.320584   17031 main.go:141] libmachine: STDOUT: 
	I0610 04:38:57.320602   17031 main.go:141] libmachine: STDERR: 
	I0610 04:38:57.320671   17031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2 +20000M
	I0610 04:38:57.332437   17031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:38:57.332457   17031 main.go:141] libmachine: STDERR: 
	I0610 04:38:57.332481   17031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2
	I0610 04:38:57.332486   17031 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:38:57.332526   17031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:89:35:36:57:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2
	I0610 04:38:57.334300   17031 main.go:141] libmachine: STDOUT: 
	I0610 04:38:57.334312   17031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:38:57.334331   17031 client.go:171] duration metric: took 376.075042ms to LocalClient.Create
	I0610 04:38:59.336575   17031 start.go:128] duration metric: took 2.405578792s to createHost
	I0610 04:38:59.336652   17031 start.go:83] releasing machines lock for "force-systemd-env-423000", held for 2.405711333s
	W0610 04:38:59.336768   17031 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:38:59.353589   17031 out.go:177] * Deleting "force-systemd-env-423000" in qemu2 ...
	W0610 04:38:59.377646   17031 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:38:59.377671   17031 start.go:728] Will try again in 5 seconds ...
	I0610 04:39:04.379968   17031 start.go:360] acquireMachinesLock for force-systemd-env-423000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:39:04.380391   17031 start.go:364] duration metric: took 339µs to acquireMachinesLock for "force-systemd-env-423000"
	I0610 04:39:04.380544   17031 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-423000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-423000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:39:04.380952   17031 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:39:04.386606   17031 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 04:39:04.435914   17031 start.go:159] libmachine.API.Create for "force-systemd-env-423000" (driver="qemu2")
	I0610 04:39:04.435965   17031 client.go:168] LocalClient.Create starting
	I0610 04:39:04.436060   17031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:39:04.436125   17031 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:04.436143   17031 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:04.436234   17031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:39:04.436277   17031 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:04.436289   17031 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:04.436782   17031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:39:04.586318   17031 main.go:141] libmachine: Creating SSH key...
	I0610 04:39:04.725683   17031 main.go:141] libmachine: Creating Disk image...
	I0610 04:39:04.725689   17031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:39:04.725870   17031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2
	I0610 04:39:04.738319   17031 main.go:141] libmachine: STDOUT: 
	I0610 04:39:04.738355   17031 main.go:141] libmachine: STDERR: 
	I0610 04:39:04.738413   17031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2 +20000M
	I0610 04:39:04.749219   17031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:39:04.749236   17031 main.go:141] libmachine: STDERR: 
	I0610 04:39:04.749257   17031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2
	I0610 04:39:04.749264   17031 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:39:04.749305   17031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:34:e3:10:c1:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/force-systemd-env-423000/disk.qcow2
	I0610 04:39:04.751022   17031 main.go:141] libmachine: STDOUT: 
	I0610 04:39:04.751038   17031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:39:04.751051   17031 client.go:171] duration metric: took 315.078584ms to LocalClient.Create
	I0610 04:39:06.753235   17031 start.go:128] duration metric: took 2.372224667s to createHost
	I0610 04:39:06.753284   17031 start.go:83] releasing machines lock for "force-systemd-env-423000", held for 2.372851416s
	W0610 04:39:06.753734   17031 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-423000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-423000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:06.767269   17031 out.go:177] 
	W0610 04:39:06.772530   17031 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:39:06.772568   17031 out.go:239] * 
	* 
	W0610 04:39:06.775354   17031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:39:06.783251   17031 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-423000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-423000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-423000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.546417ms)

-- stdout --
	* The control-plane node force-systemd-env-423000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-423000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-423000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-06-10 04:39:06.88039 -0700 PDT m=+1390.570793001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-423000 -n force-systemd-env-423000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-423000 -n force-systemd-env-423000: exit status 7 (33.916541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-423000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-423000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-423000
--- FAIL: TestForceSystemdEnv (10.35s)
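
Note: in the trace above, both qemu-img steps (the raw-to-qcow2 convert and the +20000M resize) succeed; the run only fails at the socket_vmnet attach. For reference, the driver's disk preparation can be reproduced by hand with stock qemu-img; this is a sketch using shortened, hypothetical image paths in place of the profile directory:

    # convert the raw boot2docker disk into qcow2, exactly as the driver does
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    # grow the image by 20000 MB, matching the "+20000M" resize in the log
    qemu-img resize disk.qcow2 +20000M
    # inspect the result: format should be qcow2, virtual size about 20 GB
    qemu-img info disk.qcow2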

TestErrorSpam/setup (10.02s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-972000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-972000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 --driver=qemu2 : exit status 80 (10.0144635s)

-- stdout --
	* [nospam-972000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-972000" primary control-plane node in "nospam-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-972000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-972000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-972000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=19052
- KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-972000" primary control-plane node in "nospam-972000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-972000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (10.02s)
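
Note: every failure in this report reduces to the same condition: nothing is listening on /var/run/socket_vmnet when socket_vmnet_client tries to hand qemu a connected file descriptor, hence "Connection refused". A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver documentation describes (paths and service setup may differ on other machines):

    # does the control socket exist, and is the daemon running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # the daemon must run as root to use vmnet; (re)start it via Homebrew services
    HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet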

TestFunctional/serial/StartWithProxy (9.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.885206167s)

-- stdout --
	* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-296000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52827 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52827 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52827 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-296000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=19052
- KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-296000" primary control-plane node in "functional-296000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-296000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52827 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52827 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52827 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (69.664167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.96s)
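
Note: the "Local proxy ignored" warnings are expected: minikube declines to pass a localhost proxy (here HTTP_PROXY=localhost:52827) into the VM, where localhost would resolve to the guest itself. The assertions fail only because the VM never booted, so the "Found network options" and "You appear to be using a proxy" messages the test looks for were never printed. With a proxy the guest can actually reach, the pass-through is normally driven by environment variables; the proxy address below is hypothetical:

    HTTP_PROXY=http://10.0.0.5:3128 HTTPS_PROXY=http://10.0.0.5:3128 \
    NO_PROXY=192.168.105.0/24 \
        out/minikube-darwin-arm64 start -p functional-296000 --driver=qemu2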

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --alsologtostderr -v=8: exit status 80 (5.182675542s)

-- stdout --
	* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	* Restarting existing qemu2 VM for "functional-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:17:03.975987   15029 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:17:03.976124   15029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:17:03.976127   15029 out.go:304] Setting ErrFile to fd 2...
	I0610 04:17:03.976130   15029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:17:03.976256   15029 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:17:03.977276   15029 out.go:298] Setting JSON to false
	I0610 04:17:03.993493   15029 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8194,"bootTime":1718010029,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:17:03.993564   15029 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:17:03.998702   15029 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:17:04.005525   15029 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:17:04.005583   15029 notify.go:220] Checking for updates...
	I0610 04:17:04.012641   15029 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:17:04.015561   15029 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:17:04.018615   15029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:17:04.021673   15029 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:17:04.023007   15029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:17:04.026009   15029 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:17:04.026058   15029 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:17:04.030608   15029 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:17:04.035594   15029 start.go:297] selected driver: qemu2
	I0610 04:17:04.035599   15029 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:17:04.035643   15029 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:17:04.037659   15029 cni.go:84] Creating CNI manager for ""
	I0610 04:17:04.037677   15029 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:17:04.037717   15029 start.go:340] cluster config:
	{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:17:04.041950   15029 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:17:04.049610   15029 out.go:177] * Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	I0610 04:17:04.053631   15029 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:17:04.053650   15029 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:17:04.053660   15029 cache.go:56] Caching tarball of preloaded images
	I0610 04:17:04.053728   15029 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:17:04.053733   15029 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:17:04.053812   15029 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/functional-296000/config.json ...
	I0610 04:17:04.054321   15029 start.go:360] acquireMachinesLock for functional-296000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:17:04.054351   15029 start.go:364] duration metric: took 23.541µs to acquireMachinesLock for "functional-296000"
	I0610 04:17:04.054359   15029 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:17:04.054366   15029 fix.go:54] fixHost starting: 
	I0610 04:17:04.054488   15029 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
	W0610 04:17:04.054496   15029 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:17:04.062614   15029 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
	I0610 04:17:04.066648   15029 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:47:9f:b5:65:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/disk.qcow2
	I0610 04:17:04.068613   15029 main.go:141] libmachine: STDOUT: 
	I0610 04:17:04.068634   15029 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:17:04.068664   15029 fix.go:56] duration metric: took 14.296792ms for fixHost
	I0610 04:17:04.068670   15029 start.go:83] releasing machines lock for "functional-296000", held for 14.315042ms
	W0610 04:17:04.068677   15029 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:17:04.068720   15029 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:17:04.068725   15029 start.go:728] Will try again in 5 seconds ...
	I0610 04:17:09.070877   15029 start.go:360] acquireMachinesLock for functional-296000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:17:09.071335   15029 start.go:364] duration metric: took 372.958µs to acquireMachinesLock for "functional-296000"
	I0610 04:17:09.071482   15029 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:17:09.071504   15029 fix.go:54] fixHost starting: 
	I0610 04:17:09.072233   15029 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
	W0610 04:17:09.072260   15029 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:17:09.080750   15029 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
	I0610 04:17:09.085875   15029 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:47:9f:b5:65:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/disk.qcow2
	I0610 04:17:09.095506   15029 main.go:141] libmachine: STDOUT: 
	I0610 04:17:09.095584   15029 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:17:09.095659   15029 fix.go:56] duration metric: took 24.158041ms for fixHost
	I0610 04:17:09.095676   15029 start.go:83] releasing machines lock for "functional-296000", held for 24.317292ms
	W0610 04:17:09.095824   15029 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:17:09.101652   15029 out.go:177] 
	W0610 04:17:09.104801   15029 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:17:09.104836   15029 out.go:239] * 
	* 
	W0610 04:17:09.107499   15029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:17:09.115723   15029 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-296000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.18444325s for "functional-296000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (67.993958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
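
Note: the post-mortem's "status --format={{.Host}}" renders minikube's status through a Go template, which is why the only output is "Stopped" and why exit status 7 is tolerated ("may be ok"). A sketch of the same probe with more fields; the template names beyond Host (Kubelet, APIServer, Kubeconfig) are an assumption based on minikube's default status listing:

    # probe individual status fields via the Go-template formatter (field names assumed)
    out/minikube-darwin-arm64 status -p functional-296000 \
        --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'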

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.886458ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-296000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.664875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
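
Note: "current-context is not set" follows directly from the failed start: minikube writes a context named after the profile into the KUBECONFIG file only once a cluster comes up. The same state can be inspected, and later repaired, with stock kubectl:

    # list the contexts recorded in the kubeconfig
    kubectl config get-contexts
    # select the profile's context after a successful start
    kubectl config use-context functional-296000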

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-296000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-296000 get po -A: exit status 1 (26.278333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-296000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-296000\n"*: args "kubectl --context functional-296000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-296000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.999042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
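
Note: "get po -A" is the short form of "get pods --all-namespaces"; on a healthy cluster the kube-system pods the assertion expects would show up in that listing, e.g.:

    kubectl --context functional-296000 get pods --all-namespaces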

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl images: exit status 83 (40.61875ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (40.829333ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-296000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.952709ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.833292ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-296000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
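
Note: the cache_reload flow deletes a cached image inside the node, confirms it is gone, then expects "cache reload" to push every image in the host-side cache back into the node's runtime. The cache subcommands involved can be driven by hand (image name taken from the test above):

    # add an image to the host-side cache and load it into running nodes
    out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:latest
    # show what the cache holds, then re-push everything to the node
    out/minikube-darwin-arm64 cache list
    out/minikube-darwin-arm64 -p functional-296000 cache reload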

TestFunctional/serial/MinikubeKubectlCmd (0.64s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 kubectl -- --context functional-296000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 kubectl -- --context functional-296000 get pods: exit status 1 (605.388958ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-296000
	* no server found for cluster "functional-296000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-296000 kubectl -- --context functional-296000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (32.06625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.64s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-296000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-296000 get pods: exit status 1 (920.945709ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-296000
	* no server found for cluster "functional-296000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-296000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.160125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)
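Both kubectl variants fail identically because "minikube start" never completed, so no functional-296000 context or cluster entry was ever written to the kubeconfig. A hedged way to confirm that from the host, assuming the KUBECONFIG path this report prints elsewhere:

$ KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig kubectl config get-contexts   # expect functional-296000 to be absent
$ KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig kubectl config view -o jsonpath='{.clusters[*].name}'   # likewise no "functional-296000" cluster entry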

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.189170875s)

-- stdout --
	* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	* Restarting existing qemu2 VM for "functional-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-296000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.189752083s for "functional-296000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (69.226875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
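The two "Restarting existing qemu2 VM" attempts above surface the likely root cause behind the repeated qemu2 start failures in this report: QEMU is launched through socket_vmnet_client, and the socket_vmnet daemon is not answering on /var/run/socket_vmnet. A host-side triage sketch, assuming socket_vmnet was installed via Homebrew at the paths the cluster config prints (/opt/socket_vmnet/bin/socket_vmnet_client, /var/run/socket_vmnet):

$ pgrep -fl socket_vmnet                    # is the daemon process running at all?
$ ls -l /var/run/socket_vmnet               # the socket QEMU is told to connect to must exist
$ sudo brew services restart socket_vmnet   # restart the service, if it is Homebrew-managed as assumed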

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-296000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-296000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.792958ms)

** stderr ** 
	error: context "functional-296000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-296000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.062125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 logs: exit status 83 (77.630125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:15 PDT |                     |
	|         | -p download-only-586000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| delete  | -p download-only-586000                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| start   | -o=json --download-only                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | -p download-only-791000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| delete  | -p download-only-791000                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| delete  | -p download-only-586000                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| delete  | -p download-only-791000                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| start   | --download-only -p                                                       | binary-mirror-722000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | binary-mirror-722000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52803                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-722000                                                  | binary-mirror-722000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| addons  | enable dashboard -p                                                      | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | addons-057000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | addons-057000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-057000 --wait=true                                             | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-057000                                                         | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| start   | -p nospam-972000 -n=1 --memory=2250 --wait=false                         | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-972000                                                         | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
	| cache   | functional-296000 cache delete                                           | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	| ssh     | functional-296000 ssh sudo                                               | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-296000                                                        | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-296000 cache reload                                           | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-296000 kubectl --                                             | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
	|         | --context functional-296000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 04:17:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 04:17:15.507026   15109 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:17:15.507162   15109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:17:15.507164   15109 out.go:304] Setting ErrFile to fd 2...
	I0610 04:17:15.507165   15109 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:17:15.507272   15109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:17:15.508260   15109 out.go:298] Setting JSON to false
	I0610 04:17:15.524725   15109 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8206,"bootTime":1718010029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:17:15.524786   15109 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:17:15.530040   15109 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:17:15.538991   15109 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:17:15.539028   15109 notify.go:220] Checking for updates...
	I0610 04:17:15.545892   15109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:17:15.549938   15109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:17:15.552882   15109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:17:15.555927   15109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:17:15.558973   15109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:17:15.562225   15109 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:17:15.562285   15109 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:17:15.566948   15109 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:17:15.572927   15109 start.go:297] selected driver: qemu2
	I0610 04:17:15.572932   15109 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:17:15.572985   15109 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:17:15.575285   15109 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:17:15.575305   15109 cni.go:84] Creating CNI manager for ""
	I0610 04:17:15.575312   15109 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:17:15.575355   15109 start.go:340] cluster config:
	{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:17:15.579892   15109 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:17:15.588005   15109 out.go:177] * Starting "functional-296000" primary control-plane node in "functional-296000" cluster
	I0610 04:17:15.591937   15109 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:17:15.591951   15109 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:17:15.591959   15109 cache.go:56] Caching tarball of preloaded images
	I0610 04:17:15.592015   15109 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:17:15.592018   15109 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:17:15.592083   15109 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/functional-296000/config.json ...
	I0610 04:17:15.592541   15109 start.go:360] acquireMachinesLock for functional-296000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:17:15.592574   15109 start.go:364] duration metric: took 28µs to acquireMachinesLock for "functional-296000"
	I0610 04:17:15.592580   15109 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:17:15.592585   15109 fix.go:54] fixHost starting: 
	I0610 04:17:15.592703   15109 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
	W0610 04:17:15.592709   15109 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:17:15.600890   15109 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
	I0610 04:17:15.604943   15109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:47:9f:b5:65:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/disk.qcow2
	I0610 04:17:15.606941   15109 main.go:141] libmachine: STDOUT: 
	I0610 04:17:15.606957   15109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:17:15.606988   15109 fix.go:56] duration metric: took 14.402583ms for fixHost
	I0610 04:17:15.606991   15109 start.go:83] releasing machines lock for "functional-296000", held for 14.414416ms
	W0610 04:17:15.606998   15109 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:17:15.607023   15109 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:17:15.607028   15109 start.go:728] Will try again in 5 seconds ...
	I0610 04:17:20.609314   15109 start.go:360] acquireMachinesLock for functional-296000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:17:20.609754   15109 start.go:364] duration metric: took 362.833µs to acquireMachinesLock for "functional-296000"
	I0610 04:17:20.609897   15109 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:17:20.609912   15109 fix.go:54] fixHost starting: 
	I0610 04:17:20.610669   15109 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
	W0610 04:17:20.610691   15109 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:17:20.620126   15109 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
	I0610 04:17:20.624212   15109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:47:9f:b5:65:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/disk.qcow2
	I0610 04:17:20.633688   15109 main.go:141] libmachine: STDOUT: 
	I0610 04:17:20.633732   15109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:17:20.633816   15109 fix.go:56] duration metric: took 23.907291ms for fixHost
	I0610 04:17:20.633828   15109 start.go:83] releasing machines lock for "functional-296000", held for 24.058584ms
	W0610 04:17:20.633975   15109 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:17:20.641184   15109 out.go:177] 
	W0610 04:17:20.645135   15109 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:17:20.645156   15109 out.go:239] * 
	W0610 04:17:20.648277   15109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:17:20.656154   15109 out.go:177] 
	
	
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-296000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:15 PDT |                     |
|         | -p download-only-586000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| delete  | -p download-only-586000                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | -p download-only-791000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| delete  | -p download-only-791000                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| delete  | -p download-only-586000                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| delete  | -p download-only-791000                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| start   | --download-only -p                                                       | binary-mirror-722000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | binary-mirror-722000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52803                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-722000                                                  | binary-mirror-722000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| addons  | enable dashboard -p                                                      | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | addons-057000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | addons-057000                                                            |                      |         |         |                     |                     |
| start   | -p addons-057000 --wait=true                                             | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-057000                                                         | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| start   | -p nospam-972000 -n=1 --memory=2250 --wait=false                         | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-972000                                                         | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
| cache   | functional-296000 cache delete                                           | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
| ssh     | functional-296000 ssh sudo                                               | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-296000                                                        | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-296000 cache reload                                           | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-296000 kubectl --                                             | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | --context functional-296000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/06/10 04:17:15
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0610 04:17:15.507026   15109 out.go:291] Setting OutFile to fd 1 ...
I0610 04:17:15.507162   15109 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:17:15.507164   15109 out.go:304] Setting ErrFile to fd 2...
I0610 04:17:15.507165   15109 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:17:15.507272   15109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:17:15.508260   15109 out.go:298] Setting JSON to false
I0610 04:17:15.524725   15109 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8206,"bootTime":1718010029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0610 04:17:15.524786   15109 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0610 04:17:15.530040   15109 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0610 04:17:15.538991   15109 out.go:177]   - MINIKUBE_LOCATION=19052
I0610 04:17:15.539028   15109 notify.go:220] Checking for updates...
I0610 04:17:15.545892   15109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
I0610 04:17:15.549938   15109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0610 04:17:15.552882   15109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0610 04:17:15.555927   15109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
I0610 04:17:15.558973   15109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0610 04:17:15.562225   15109 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:17:15.562285   15109 driver.go:392] Setting default libvirt URI to qemu:///system
I0610 04:17:15.566948   15109 out.go:177] * Using the qemu2 driver based on existing profile
I0610 04:17:15.572927   15109 start.go:297] selected driver: qemu2
I0610 04:17:15.572932   15109 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0610 04:17:15.572985   15109 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0610 04:17:15.575285   15109 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0610 04:17:15.575305   15109 cni.go:84] Creating CNI manager for ""
I0610 04:17:15.575312   15109 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0610 04:17:15.575355   15109 start.go:340] cluster config:
{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0610 04:17:15.579892   15109 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0610 04:17:15.588005   15109 out.go:177] * Starting "functional-296000" primary control-plane node in "functional-296000" cluster
I0610 04:17:15.591937   15109 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0610 04:17:15.591951   15109 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0610 04:17:15.591959   15109 cache.go:56] Caching tarball of preloaded images
I0610 04:17:15.592015   15109 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0610 04:17:15.592018   15109 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0610 04:17:15.592083   15109 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/functional-296000/config.json ...
I0610 04:17:15.592541   15109 start.go:360] acquireMachinesLock for functional-296000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 04:17:15.592574   15109 start.go:364] duration metric: took 28µs to acquireMachinesLock for "functional-296000"
I0610 04:17:15.592580   15109 start.go:96] Skipping create...Using existing machine configuration
I0610 04:17:15.592585   15109 fix.go:54] fixHost starting: 
I0610 04:17:15.592703   15109 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
W0610 04:17:15.592709   15109 fix.go:138] unexpected machine state, will restart: <nil>
I0610 04:17:15.600890   15109 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
I0610 04:17:15.604943   15109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:47:9f:b5:65:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/disk.qcow2
I0610 04:17:15.606941   15109 main.go:141] libmachine: STDOUT: 
I0610 04:17:15.606957   15109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0610 04:17:15.606988   15109 fix.go:56] duration metric: took 14.402583ms for fixHost
I0610 04:17:15.606991   15109 start.go:83] releasing machines lock for "functional-296000", held for 14.414416ms
W0610 04:17:15.606998   15109 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 04:17:15.607023   15109 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 04:17:15.607028   15109 start.go:728] Will try again in 5 seconds ...
I0610 04:17:20.609314   15109 start.go:360] acquireMachinesLock for functional-296000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 04:17:20.609754   15109 start.go:364] duration metric: took 362.833µs to acquireMachinesLock for "functional-296000"
I0610 04:17:20.609897   15109 start.go:96] Skipping create...Using existing machine configuration
I0610 04:17:20.609912   15109 fix.go:54] fixHost starting: 
I0610 04:17:20.610669   15109 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
W0610 04:17:20.610691   15109 fix.go:138] unexpected machine state, will restart: <nil>
I0610 04:17:20.620126   15109 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
I0610 04:17:20.624212   15109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:47:9f:b5:65:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/disk.qcow2
I0610 04:17:20.633688   15109 main.go:141] libmachine: STDOUT: 
I0610 04:17:20.633732   15109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0610 04:17:20.633816   15109 fix.go:56] duration metric: took 23.907291ms for fixHost
I0610 04:17:20.633828   15109 start.go:83] releasing machines lock for "functional-296000", held for 24.058584ms
W0610 04:17:20.633975   15109 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 04:17:20.641184   15109 out.go:177] 
W0610 04:17:20.645135   15109 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 04:17:20.645156   15109 out.go:239] * 
W0610 04:17:20.648277   15109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 04:17:20.656154   15109 out.go:177] 
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
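Root cause visible in the dump above: both restart attempts die at the same step, because /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so the VM never boots and the log assertions run against a stopped host. A minimal triage sketch, assuming socket_vmnet was installed at the paths shown in the log; the --vmnet-gateway value follows the socket_vmnet README and is an assumption, not something taken from this report:

    # Is anything listening on the socket the driver dials?
    ls -l /var/run/socket_vmnet
    sudo lsof -U | grep socket_vmnet   # lists open UNIX-domain sockets

    # If the daemon is down, start it by hand and re-run the suite
    # (gateway address is the README default, adjust to your install):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet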
TestFunctional/serial/LogsFileCmd (0.07s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd706295239/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
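Because the host never started, "minikube logs" can only return the client-side audit and start history; no guest (Linux) lines exist, which is why the "Linux" substring check at functional_test.go:1224 fails against an effectively empty dump. A hedged repro outside the harness, where the /tmp output path is illustrative rather than the harness path:

    out/minikube-darwin-arm64 -p functional-296000 logs --file /tmp/logs.txt
    grep -c Linux /tmp/logs.txt   # the test expects at least one match; here there are none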
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:15 PDT |                     |
|         | -p download-only-586000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| delete  | -p download-only-586000                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | -p download-only-791000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| delete  | -p download-only-791000                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| delete  | -p download-only-586000                                                  | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| delete  | -p download-only-791000                                                  | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| start   | --download-only -p                                                       | binary-mirror-722000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | binary-mirror-722000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52803                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-722000                                                  | binary-mirror-722000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| addons  | enable dashboard -p                                                      | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | addons-057000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | addons-057000                                                            |                      |         |         |                     |                     |
| start   | -p addons-057000 --wait=true                                             | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-057000                                                         | addons-057000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| start   | -p nospam-972000 -n=1 --memory=2250 --wait=false                         | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-972000 --log_dir                                                  | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-972000                                                         | nospam-972000        | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-296000 cache add                                              | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
| cache   | functional-296000 cache delete                                           | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | minikube-local-cache-test:functional-296000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
| ssh     | functional-296000 ssh sudo                                               | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-296000                                                        | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-296000 cache reload                                           | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
| ssh     | functional-296000 ssh                                                    | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT | 10 Jun 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-296000 kubectl --                                             | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | --context functional-296000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-296000                                                     | functional-296000    | jenkins | v1.33.1 | 10 Jun 24 04:17 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

                                                
                                                

                                                
                                                
==> Last Start <==
Log file created at: 2024/06/10 04:17:15
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0610 04:17:15.507026   15109 out.go:291] Setting OutFile to fd 1 ...
I0610 04:17:15.507162   15109 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:17:15.507164   15109 out.go:304] Setting ErrFile to fd 2...
I0610 04:17:15.507165   15109 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:17:15.507272   15109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:17:15.508260   15109 out.go:298] Setting JSON to false
I0610 04:17:15.524725   15109 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8206,"bootTime":1718010029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0610 04:17:15.524786   15109 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0610 04:17:15.530040   15109 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0610 04:17:15.538991   15109 out.go:177]   - MINIKUBE_LOCATION=19052
I0610 04:17:15.539028   15109 notify.go:220] Checking for updates...
I0610 04:17:15.545892   15109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
I0610 04:17:15.549938   15109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0610 04:17:15.552882   15109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0610 04:17:15.555927   15109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
I0610 04:17:15.558973   15109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0610 04:17:15.562225   15109 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:17:15.562285   15109 driver.go:392] Setting default libvirt URI to qemu:///system
I0610 04:17:15.566948   15109 out.go:177] * Using the qemu2 driver based on existing profile
I0610 04:17:15.572927   15109 start.go:297] selected driver: qemu2
I0610 04:17:15.572932   15109 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0610 04:17:15.572985   15109 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0610 04:17:15.575285   15109 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0610 04:17:15.575305   15109 cni.go:84] Creating CNI manager for ""
I0610 04:17:15.575312   15109 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0610 04:17:15.575355   15109 start.go:340] cluster config:
{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0610 04:17:15.579892   15109 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0610 04:17:15.588005   15109 out.go:177] * Starting "functional-296000" primary control-plane node in "functional-296000" cluster
I0610 04:17:15.591937   15109 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0610 04:17:15.591951   15109 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0610 04:17:15.591959   15109 cache.go:56] Caching tarball of preloaded images
I0610 04:17:15.592015   15109 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0610 04:17:15.592018   15109 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0610 04:17:15.592083   15109 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/functional-296000/config.json ...
I0610 04:17:15.592541   15109 start.go:360] acquireMachinesLock for functional-296000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 04:17:15.592574   15109 start.go:364] duration metric: took 28µs to acquireMachinesLock for "functional-296000"
I0610 04:17:15.592580   15109 start.go:96] Skipping create...Using existing machine configuration
I0610 04:17:15.592585   15109 fix.go:54] fixHost starting: 
I0610 04:17:15.592703   15109 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
W0610 04:17:15.592709   15109 fix.go:138] unexpected machine state, will restart: <nil>
I0610 04:17:15.600890   15109 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
I0610 04:17:15.604943   15109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:47:9f:b5:65:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/disk.qcow2
I0610 04:17:15.606941   15109 main.go:141] libmachine: STDOUT: 
I0610 04:17:15.606957   15109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0610 04:17:15.606988   15109 fix.go:56] duration metric: took 14.402583ms for fixHost
I0610 04:17:15.606991   15109 start.go:83] releasing machines lock for "functional-296000", held for 14.414416ms
W0610 04:17:15.606998   15109 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 04:17:15.607023   15109 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 04:17:15.607028   15109 start.go:728] Will try again in 5 seconds ...
I0610 04:17:20.609314   15109 start.go:360] acquireMachinesLock for functional-296000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 04:17:20.609754   15109 start.go:364] duration metric: took 362.833µs to acquireMachinesLock for "functional-296000"
I0610 04:17:20.609897   15109 start.go:96] Skipping create...Using existing machine configuration
I0610 04:17:20.609912   15109 fix.go:54] fixHost starting: 
I0610 04:17:20.610669   15109 fix.go:112] recreateIfNeeded on functional-296000: state=Stopped err=<nil>
W0610 04:17:20.610691   15109 fix.go:138] unexpected machine state, will restart: <nil>
I0610 04:17:20.620126   15109 out.go:177] * Restarting existing qemu2 VM for "functional-296000" ...
I0610 04:17:20.624212   15109 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:47:9f:b5:65:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/functional-296000/disk.qcow2
I0610 04:17:20.633688   15109 main.go:141] libmachine: STDOUT: 
I0610 04:17:20.633732   15109 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0610 04:17:20.633816   15109 fix.go:56] duration metric: took 23.907291ms for fixHost
I0610 04:17:20.633828   15109 start.go:83] releasing machines lock for "functional-296000", held for 24.058584ms
W0610 04:17:20.633975   15109 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 04:17:20.641184   15109 out.go:177] 
W0610 04:17:20.645135   15109 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 04:17:20.645156   15109 out.go:239] * 
W0610 04:17:20.648277   15109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 04:17:20.656154   15109 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
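Note: every failure that follows shares the root cause visible in the log above: the qemu2 driver cannot reach the socket_vmnet helper at /var/run/socket_vmnet, so the VM never boots and no kubeconfig context is ever written. As a quick host-side sanity check outside the test suite, a few lines of Go can probe the same unix socket the driver dials (a diagnostic sketch only; the socket path is taken from the SocketVMnetPath field logged above):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the control socket that socket_vmnet_client connects to on behalf of QEMU.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // A missing or dead daemon reproduces the "Connection refused" above.
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the socket_vmnet daemon on the host is the likely fix; the suggested "minikube delete -p functional-296000" cannot help on its own, because the socket belongs to the host, not to the profile.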

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-296000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-296000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.585041ms)

** stderr ** 
	error: context "functional-296000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-296000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
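The error `context "functional-296000" does not exist` is a downstream symptom of the failed start: minikube writes a kubeconfig context only after the VM boots. This sketch (assuming the k8s.io/client-go dependency) lists the contexts kubectl would actually resolve, which is useful when triaging this class of failure:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig via the same default chain kubectl uses
        // ($KUBECONFIG, then ~/.kube/config).
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            panic(err)
        }
        for name := range cfg.Contexts {
            fmt.Println(name) // "functional-296000" would appear here after a successful start
        }
    }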

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-296000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-296000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-296000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-296000 --alsologtostderr -v=1] stderr:
I0610 04:17:55.248975   15316 out.go:291] Setting OutFile to fd 1 ...
I0610 04:17:55.249376   15316 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:17:55.249380   15316 out.go:304] Setting ErrFile to fd 2...
I0610 04:17:55.249382   15316 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:17:55.249555   15316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:17:55.249813   15316 mustload.go:65] Loading cluster: functional-296000
I0610 04:17:55.250000   15316 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:17:55.254418   15316 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
I0610 04:17:55.258462   15316 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (41.234292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 status: exit status 7 (73.889667ms)

-- stdout --
	functional-296000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-296000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.900959ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-296000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 status -o json: exit status 7 (29.642125ms)

-- stdout --
	{"Name":"functional-296000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-296000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.282834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)
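The repeated `exit status 7` from `minikube status` is expected for a stopped profile: per the command's own help text, the exit code is a bitmask (1 for minikube/host not OK, 2 for the cluster not OK, 4 for Kubernetes not OK), so 7 means all three checks failed. A one-liner to decode it, a sketch based on that documented convention:

    package main

    import "fmt"

    func main() {
        // Decode minikube's documented status bitmask: 7 = 1 + 2 + 4.
        code := 7
        fmt.Printf("host NOK=%v cluster NOK=%v kubernetes NOK=%v\n",
            code&1 != 0, code&2 != 0, code&4 != 0)
    }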

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-296000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-296000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.028584ms)

** stderr ** 
	error: context "functional-296000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-296000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-296000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-296000 describe po hello-node-connect: exit status 1 (26.270208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:1600: "kubectl --context functional-296000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-296000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-296000 logs -l app=hello-node-connect: exit status 1 (25.940958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:1606: "kubectl --context functional-296000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-296000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-296000 describe svc hello-node-connect: exit status 1 (26.171125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:1612: "kubectl --context functional-296000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.277875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-296000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (35.592208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "echo hello": exit status 83 (46.661625ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n"*. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "cat /etc/hostname": exit status 83 (41.689625ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-296000"- but got *"* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n"*. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (39.019083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (52.634459ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.100541ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-296000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-296000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cp functional-296000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd421511507/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 cp functional-296000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd421511507/001/cp-test.txt: exit status 83 (39.712917ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 cp functional-296000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd421511507/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.997459ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd421511507/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (43.60925ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (52.841083ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-296000 ssh -n functional-296000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-296000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-296000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)
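The `(-want +got)` blocks above are rendered by the go-cmp library (github.com/google/go-cmp), which the test helpers use to diff file contents; long strings are displayed as a `strings.Join` of segments. A minimal reproduction of the same rendering:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-296000 host is not running: state=Stopped\n"
        // cmp.Diff returns "" when equal, otherwise the -want +got diff format seen above.
        fmt.Println(cmp.Diff(want, got))
    }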

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14783/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/test/nested/copy/14783/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/test/nested/copy/14783/hosts": exit status 83 (40.707167ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/test/nested/copy/14783/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-296000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-296000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.071166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
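The numeric component in /etc/test/nested/copy/14783/hosts (and in the 14783.pem paths of the next test) looks like the PID of the test binary, used to keep synced paths unique per run; note it differs from the minikube process PIDs (15109, 15316) seen in the logs. That reading is an assumption, illustrated by this hypothetical reconstruction:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Assumption: the test derives the unique path from its own PID.
        fmt.Printf("/etc/test/nested/copy/%d/hosts\n", os.Getpid())
    }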

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14783.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/14783.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/14783.pem": exit status 83 (43.400416ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/14783.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /etc/ssl/certs/14783.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/14783.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14783.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /usr/share/ca-certificates/14783.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /usr/share/ca-certificates/14783.pem": exit status 83 (40.844625ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/14783.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /usr/share/ca-certificates/14783.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/14783.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (41.486042ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/147832.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/147832.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/147832.pem": exit status 83 (38.906291ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/147832.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /etc/ssl/certs/147832.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/147832.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/147832.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /usr/share/ca-certificates/147832.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /usr/share/ca-certificates/147832.pem": exit status 83 (43.542ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/147832.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /usr/share/ca-certificates/147832.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/147832.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (40.6995ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-296000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-296000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (29.469125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
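CertSync expects minikube_test.pem to be synced into the VM under several names, including /etc/ssl/certs/51391683.0, which follows OpenSSL's subject-hash naming convention for CA directories. To see what the test was looking for, the fixture certificate can be inspected with the standard library (a sketch; minikube_test.pem is the fixture named in the mismatch messages above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("minikube_test.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(cert.Subject) // the self-signed "minikube / Party Parrots" test certificate
    }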

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-296000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-296000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.600541ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-296000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-296000 -n functional-296000: exit status 7 (30.425167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-296000" host is not running, skipping log retrieval (state="Stopped")
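Note: every assertion in this test fails for the same upstream reason: the VM never started, so the "functional-296000" kubeconfig context was never written. Against a running profile the check reduces to a go-template query over the first node's labels; a minimal sketch, assuming the cluster is up:

	kubectl --context functional-296000 get nodes --output=go-template \
		--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	# should print, among others: minikube.k8s.io/commit minikube.k8s.io/name
	# minikube.k8s.io/primary minikube.k8s.io/updated_at minikube.k8s.io/version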
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo systemctl is-active crio": exit status 83 (40.2095ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n" 
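Note: with Docker as the configured runtime, the test expects CRI-O to be disabled inside the node, i.e. `systemctl is-active crio` should print "inactive" and exit non-zero (typically status 3). The intended check, sketched for a running node:

	out/minikube-darwin-arm64 -p functional-296000 ssh "sudo systemctl is-active crio"
	# healthy docker-runtime node: prints "inactive", exit status 3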
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0610 04:17:21.294199   15161 out.go:291] Setting OutFile to fd 1 ...
I0610 04:17:21.294358   15161 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:17:21.294363   15161 out.go:304] Setting ErrFile to fd 2...
I0610 04:17:21.294366   15161 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:17:21.294528   15161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:17:21.294770   15161 mustload.go:65] Loading cluster: functional-296000
I0610 04:17:21.294997   15161 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:17:21.298393   15161 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
I0610 04:17:21.304434   15161 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
stdout: * The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 15160: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] stderr:
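Note: both tunnel daemons exit immediately with status 83 for the same root cause (host stopped), so the harness finds nothing to stop and the reads above fail on already-closed pipes. This subtest normally verifies that a second concurrent tunnel for the same profile is handled cleanly; the intended sequence, sketched on the assumption of a started cluster:

	out/minikube-darwin-arm64 start -p functional-296000
	out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr &
	out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr &
	# both daemons should stay alive until the test's cleanup terminates them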
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-296000": client config: context "functional-296000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (93.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-296000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-296000 get svc nginx-svc: exit status 1 (70.421083ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-296000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-296000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
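Note: the bare "http://" target means nginx-svc never received a LoadBalancer ingress IP, which a running `minikube tunnel` would normally assign. The path the test exercises, sketched for a healthy cluster and tunnel (the jsonpath query is illustrative, not taken from the test source):

	IP=$(kubectl --context functional-296000 get svc nginx-svc \
		-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP"   # body should contain "Welcome to nginx!"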
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (93.69s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-296000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-296000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.750875ms)

                                                
                                                
** stderr ** 
	error: context "functional-296000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-296000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
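Note: the later ServiceCmd subtests (List, JSONOutput, HTTPS, Format, URL) all assume this hello-node deployment exists and is exposed, so they fail in cascade. The setup they build on, sketched for a running cluster (the expose flags follow the usual NodePort pattern and are an assumption here):

	kubectl --context functional-296000 create deployment hello-node \
		--image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-296000 expose deployment hello-node \
		--type=NodePort --port=8080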
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service list: exit status 83 (43.523875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-296000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service list -o json: exit status 83 (42.68325ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-296000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service --namespace=default --https --url hello-node: exit status 83 (42.005583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-296000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service hello-node --url --format={{.IP}}: exit status 83 (45.19125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-296000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 service hello-node --url: exit status 83 (42.335292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-296000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:1565: failed to parse "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"": parse "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"": net/url: invalid control character in URL
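Note: the parse failure is mechanical: `service --url` is expected to print a bare URL, but here it printed minikube's two-line advisory, and Go's net/url rejects ASCII control characters such as the embedded newline. A minimal stand-alone reproduction in Go:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// minikube's advisory text, which contains a newline; not a URL.
		s := "* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\""
		if _, err := url.Parse(s); err != nil {
			fmt.Println(err) // ... net/url: invalid control character in URL
		}
	}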
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 version -o=json --components: exit status 83 (40.793584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-296000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-296000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-296000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image ls --format short --alsologtostderr:
I0610 04:18:05.690042   15453 out.go:291] Setting OutFile to fd 1 ...
I0610 04:18:05.690191   15453 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.690194   15453 out.go:304] Setting ErrFile to fd 2...
I0610 04:18:05.690197   15453 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.690328   15453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:18:05.690729   15453 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:18:05.690792   15453 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-296000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image ls --format table --alsologtostderr:
I0610 04:18:05.911755   15465 out.go:291] Setting OutFile to fd 1 ...
I0610 04:18:05.911896   15465 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.911900   15465 out.go:304] Setting ErrFile to fd 2...
I0610 04:18:05.911902   15465 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.912053   15465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:18:05.912444   15465 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:18:05.912512   15465 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-296000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image ls --format json --alsologtostderr:
I0610 04:18:05.876043   15463 out.go:291] Setting OutFile to fd 1 ...
I0610 04:18:05.876191   15463 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.876194   15463 out.go:304] Setting ErrFile to fd 2...
I0610 04:18:05.876196   15463 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.876319   15463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:18:05.876754   15463 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:18:05.876824   15463 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-296000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image ls --format yaml --alsologtostderr:
I0610 04:18:05.724913   15455 out.go:291] Setting OutFile to fd 1 ...
I0610 04:18:05.725078   15455 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.725081   15455 out.go:304] Setting ErrFile to fd 2...
I0610 04:18:05.725083   15455 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.725219   15455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:18:05.725601   15455 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:18:05.725663   15455 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh pgrep buildkitd: exit status 83 (42.857208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image build -t localhost/my-image:functional-296000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-296000 image build -t localhost/my-image:functional-296000 testdata/build --alsologtostderr:
I0610 04:18:05.802797   15459 out.go:291] Setting OutFile to fd 1 ...
I0610 04:18:05.803719   15459 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.803723   15459 out.go:304] Setting ErrFile to fd 2...
I0610 04:18:05.803725   15459 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:18:05.804156   15459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:18:05.804836   15459 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:18:05.805304   15459 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:18:05.805529   15459 build_images.go:133] succeeded building to: 
I0610 04:18:05.805532   15459 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "localhost/my-image:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr: (1.432386s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr: (1.363886916s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.221436709s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-296000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 image load --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr: (1.222937208s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image save gcr.io/google-containers/addon-resizer:functional-296000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-296000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-296000 docker-env) && out/minikube-darwin-arm64 status -p functional-296000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-296000 docker-env) && out/minikube-darwin-arm64 status -p functional-296000": exit status 1 (45.289333ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
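Note: `docker-env` emits shell exports (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH) that point the local docker client at the VM's daemon, so the eval-then-status pattern can only succeed once the host runs. The shape of the check:

	eval $(out/minikube-darwin-arm64 -p functional-296000 docker-env)
	out/minikube-darwin-arm64 status -p functional-296000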
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2: exit status 83 (41.336458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:18:05.943995   15467 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:18:05.944950   15467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:18:05.944975   15467 out.go:304] Setting ErrFile to fd 2...
	I0610 04:18:05.945043   15467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:18:05.945181   15467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:18:05.945404   15467 mustload.go:65] Loading cluster: functional-296000
	I0610 04:18:05.945599   15467 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:18:05.949276   15467 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
	I0610 04:18:05.953033   15467 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2: exit status 83 (42.469708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:18:06.029891   15471 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:18:06.030050   15471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:18:06.030053   15471 out.go:304] Setting ErrFile to fd 2...
	I0610 04:18:06.030055   15471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:18:06.030192   15471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:18:06.030412   15471 mustload.go:65] Loading cluster: functional-296000
	I0610 04:18:06.030613   15471 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:18:06.035029   15471 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
	I0610 04:18:06.038875   15471 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2: exit status 83 (42.60925ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:18:05.987451   15469 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:18:05.987603   15469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:18:05.987606   15469 out.go:304] Setting ErrFile to fd 2...
	I0610 04:18:05.987608   15469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:18:05.987746   15469 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:18:05.987998   15469 mustload.go:65] Loading cluster: functional-296000
	I0610 04:18:05.988178   15469 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:18:05.991997   15469 out.go:177] * The control-plane node functional-296000 host is not running: state=Stopped
	I0610 04:18:05.996014   15469 out.go:177]   To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-296000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-296000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.026132833s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
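Note: resolver #8 above shows the macOS side is wired correctly: *.cluster.local queries are scoped to 10.96.0.10, the in-cluster DNS service that a running `minikube tunnel` would make reachable. The timeout therefore points at the missing tunnel/VM, not at resolver configuration. For reference, a healthy run of the same probe returns an answer section:

	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	# expected: status NOERROR with "ANSWER: 1" and the service's cluster IP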
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.8s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.80s)

TestMultiControlPlane/serial/StartCluster (10.12s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-459000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-459000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.047355708s)

                                                
                                                
-- stdout --
	* [ha-459000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-459000" primary control-plane node in "ha-459000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-459000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
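Note: ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused means QEMU's networking helper could not reach the socket_vmnet daemon backing the socket_vmnet network, so every VM create and retry fails the same way. A quick host-side triage, sketched (paths follow the defaults visible in the config dump below; the gateway address is illustrative):

	ls -l /var/run/socket_vmnet        # the listening socket should exist
	sudo /opt/socket_vmnet/bin/socket_vmnet \
		--vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &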
** stderr ** 
	I0610 04:19:58.480936   15531 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:19:58.481071   15531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:19:58.481075   15531 out.go:304] Setting ErrFile to fd 2...
	I0610 04:19:58.481078   15531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:19:58.481200   15531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:19:58.482265   15531 out.go:298] Setting JSON to false
	I0610 04:19:58.498471   15531 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8369,"bootTime":1718010029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:19:58.498535   15531 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:19:58.504282   15531 out.go:177] * [ha-459000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:19:58.513473   15531 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:19:58.513525   15531 notify.go:220] Checking for updates...
	I0610 04:19:58.519454   15531 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:19:58.522415   15531 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:19:58.525415   15531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:19:58.528449   15531 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:19:58.531453   15531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:19:58.532964   15531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:19:58.537404   15531 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:19:58.544285   15531 start.go:297] selected driver: qemu2
	I0610 04:19:58.544292   15531 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:19:58.544299   15531 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:19:58.546458   15531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:19:58.549404   15531 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:19:58.552550   15531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:19:58.552593   15531 cni.go:84] Creating CNI manager for ""
	I0610 04:19:58.552598   15531 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 04:19:58.552605   15531 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 04:19:58.552637   15531 start.go:340] cluster config:
	{Name:ha-459000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:19:58.556992   15531 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:19:58.564422   15531 out.go:177] * Starting "ha-459000" primary control-plane node in "ha-459000" cluster
	I0610 04:19:58.568523   15531 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:19:58.568539   15531 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:19:58.568548   15531 cache.go:56] Caching tarball of preloaded images
	I0610 04:19:58.568610   15531 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:19:58.568616   15531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:19:58.568851   15531 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/ha-459000/config.json ...
	I0610 04:19:58.568863   15531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/ha-459000/config.json: {Name:mk34d572c0398bfb165e45cbae6082e2c179b756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:19:58.569241   15531 start.go:360] acquireMachinesLock for ha-459000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:19:58.569277   15531 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "ha-459000"
	I0610 04:19:58.569288   15531 start.go:93] Provisioning new machine with config: &{Name:ha-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:19:58.569315   15531 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:19:58.574451   15531 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:19:58.592865   15531 start.go:159] libmachine.API.Create for "ha-459000" (driver="qemu2")
	I0610 04:19:58.592895   15531 client.go:168] LocalClient.Create starting
	I0610 04:19:58.592950   15531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:19:58.592981   15531 main.go:141] libmachine: Decoding PEM data...
	I0610 04:19:58.592997   15531 main.go:141] libmachine: Parsing certificate...
	I0610 04:19:58.593043   15531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:19:58.593066   15531 main.go:141] libmachine: Decoding PEM data...
	I0610 04:19:58.593076   15531 main.go:141] libmachine: Parsing certificate...
	I0610 04:19:58.593472   15531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:19:58.775653   15531 main.go:141] libmachine: Creating SSH key...
	I0610 04:19:59.002491   15531 main.go:141] libmachine: Creating Disk image...
	I0610 04:19:59.002503   15531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:19:59.002722   15531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:19:59.015830   15531 main.go:141] libmachine: STDOUT: 
	I0610 04:19:59.015847   15531 main.go:141] libmachine: STDERR: 
	I0610 04:19:59.015918   15531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2 +20000M
	I0610 04:19:59.026816   15531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:19:59.026830   15531 main.go:141] libmachine: STDERR: 
	I0610 04:19:59.026849   15531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:19:59.026854   15531 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:19:59.026879   15531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:b8:62:98:27:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:19:59.028590   15531 main.go:141] libmachine: STDOUT: 
	I0610 04:19:59.028602   15531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:19:59.028621   15531 client.go:171] duration metric: took 435.717625ms to LocalClient.Create
	I0610 04:20:01.030825   15531 start.go:128] duration metric: took 2.461469958s to createHost
	I0610 04:20:01.030914   15531 start.go:83] releasing machines lock for "ha-459000", held for 2.461585958s
	W0610 04:20:01.030986   15531 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:20:01.038390   15531 out.go:177] * Deleting "ha-459000" in qemu2 ...
	W0610 04:20:01.065624   15531 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:20:01.065649   15531 start.go:728] Will try again in 5 seconds ...
	I0610 04:20:06.067945   15531 start.go:360] acquireMachinesLock for ha-459000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:20:06.068411   15531 start.go:364] duration metric: took 346.084µs to acquireMachinesLock for "ha-459000"
	I0610 04:20:06.068577   15531 start.go:93] Provisioning new machine with config: &{Name:ha-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:20:06.068919   15531 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:20:06.077722   15531 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:20:06.125927   15531 start.go:159] libmachine.API.Create for "ha-459000" (driver="qemu2")
	I0610 04:20:06.125985   15531 client.go:168] LocalClient.Create starting
	I0610 04:20:06.126101   15531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:20:06.126174   15531 main.go:141] libmachine: Decoding PEM data...
	I0610 04:20:06.126192   15531 main.go:141] libmachine: Parsing certificate...
	I0610 04:20:06.126255   15531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:20:06.126300   15531 main.go:141] libmachine: Decoding PEM data...
	I0610 04:20:06.126312   15531 main.go:141] libmachine: Parsing certificate...
	I0610 04:20:06.127239   15531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:20:06.283223   15531 main.go:141] libmachine: Creating SSH key...
	I0610 04:20:06.421771   15531 main.go:141] libmachine: Creating Disk image...
	I0610 04:20:06.421777   15531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:20:06.421999   15531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:20:06.434730   15531 main.go:141] libmachine: STDOUT: 
	I0610 04:20:06.434757   15531 main.go:141] libmachine: STDERR: 
	I0610 04:20:06.434809   15531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2 +20000M
	I0610 04:20:06.445868   15531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:20:06.445934   15531 main.go:141] libmachine: STDERR: 
	I0610 04:20:06.445947   15531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:20:06.445952   15531 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:20:06.445976   15531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:cf:dc:71:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:20:06.447695   15531 main.go:141] libmachine: STDOUT: 
	I0610 04:20:06.447713   15531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:20:06.447729   15531 client.go:171] duration metric: took 321.736333ms to LocalClient.Create
	I0610 04:20:08.450042   15531 start.go:128] duration metric: took 2.381020875s to createHost
	I0610 04:20:08.450146   15531 start.go:83] releasing machines lock for "ha-459000", held for 2.381670458s
	W0610 04:20:08.450566   15531 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-459000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-459000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:20:08.465135   15531 out.go:177] 
	W0610 04:20:08.468381   15531 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:20:08.468424   15531 out.go:239] * 
	* 
	W0610 04:20:08.470803   15531 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:20:08.486145   15531 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-459000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (69.6615ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.12s)
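
Every start attempt in the log above dies at the same point: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so createHost fails on both the first attempt and the 5-second retry, and the test exits with status 80 before Kubernetes is ever started. The condition can be reproduced outside minikube with a minimal Go sketch (illustrative only, not part of the test suite; the socket path is taken from the log above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that the qemu2 driver hands to
		// socket_vmnet_client. If no daemon is accepting on it, this
		// fails the same way the log does: connection refused.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this runner the probe would report the socket as unreachable, which is consistent with the downstream failures in the tests that follow.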

                                                
                                    
TestMultiControlPlane/serial/DeployApp (91.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.315125ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-459000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- rollout status deployment/busybox: exit status 1 (56.669417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.86625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.94725ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.872375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.666958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.230792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.584083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.274083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.3365ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.295708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.932334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.598666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.471625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.337792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.251541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (29.311209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (91.34s)
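
The kubectl failures in this test are secondary: `cluster "ha-459000" does not exist` and `no server found for cluster "ha-459000"` are what kubectl reports when the kubeconfig carries no usable cluster entry, which follows directly from StartCluster never provisioning a host. A sketch of the same check using client-go (illustrative; it assumes k8s.io/client-go is available in the module and that the default kubeconfig path is in use):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		// kubectl's "no server found for cluster" corresponds to a
		// missing or empty server in the named cluster entry.
		if c, ok := cfg.Clusters["ha-459000"]; !ok || c.Server == "" {
			fmt.Println(`no usable server entry for cluster "ha-459000"`)
			return
		}
		fmt.Println("cluster entry looks usable")
	}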

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-459000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.4045ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-459000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (29.542666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-459000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-459000 -v=7 --alsologtostderr: exit status 83 (41.310834ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-459000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-459000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:40.025411   15638 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:40.026004   15638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.026010   15638 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:40.026012   15638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.026206   15638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:40.026429   15638 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:40.026597   15638 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:40.029799   15638 out.go:177] * The control-plane node ha-459000 host is not running: state=Stopped
	I0610 04:21:40.033589   15638 out.go:177]   To start a cluster, run: "minikube start -p ha-459000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-459000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (29.329ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-459000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-459000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.456666ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-459000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-459000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-459000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (30.345875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
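
The "unexpected end of JSON input" at ha_test.go:264 does not indicate a malformed label list: kubectl exited with an error and produced no stdout, and encoding/json returns exactly that error when handed zero bytes. A minimal reproduction (illustrative only):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		// Decoding the empty output left behind by the failed kubectl call:
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}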

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-459000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-459000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (31.053792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
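
Two fields in the quoted profile JSON look garbled but are not: StartHostTimeout 360000000000 and CertExpiration 94608000000000000 are Go time.Duration values serialized as nanoseconds, matching the 6m0s and 26280h0m0s shown in the cluster config at the top of this log. A quick check (illustrative only):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		fmt.Println(time.Duration(360000000000))      // 6m0s
		fmt.Println(time.Duration(94608000000000000)) // 26280h0m0s
	}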

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status --output json -v=7 --alsologtostderr: exit status 7 (29.903375ms)

                                                
                                                
-- stdout --
	{"Name":"ha-459000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:40.255459   15651 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:40.255604   15651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.255607   15651 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:40.255609   15651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.255756   15651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:40.255881   15651 out.go:298] Setting JSON to true
	I0610 04:21:40.255896   15651 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:40.255945   15651 notify.go:220] Checking for updates...
	I0610 04:21:40.256088   15651 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:40.256095   15651 status.go:255] checking status of ha-459000 ...
	I0610 04:21:40.256312   15651 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:21:40.256315   15651 status.go:343] host is not running, skipping remaining checks
	I0610 04:21:40.256318   15651 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-459000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (29.415125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
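
The decode failure at ha_test.go:333 is a shape mismatch rather than corrupt output: with only one node in the profile, minikube status printed a single JSON object, while the test unmarshals into a []cmd.Status slice. A minimal reproduction (the Status struct below is a stand-in for the test's cmd.Status; the payload is copied from the stdout above):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		raw := []byte(`{"Name":"ha-459000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

		var many []Status
		fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

		var one Status
		if err := json.Unmarshal(raw, &one); err == nil {
			fmt.Printf("decodes as a single node: %+v\n", one)
		}
	}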

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.435ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:40.315080   15655 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:40.315618   15655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.315635   15655 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:40.315639   15655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.315795   15655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:40.316019   15655 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:40.316225   15655 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:40.320837   15655 out.go:177] 
	W0610 04:21:40.324783   15655 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0610 04:21:40.324788   15655 out.go:239] * 
	* 
	W0610 04:21:40.327245   15655 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:21:40.330757   15655 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-459000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (30.431708ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:40.363482   15657 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:40.363645   15657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.363648   15657 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:40.363650   15657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.363774   15657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:40.363899   15657 out.go:298] Setting JSON to false
	I0610 04:21:40.363910   15657 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:40.363967   15657 notify.go:220] Checking for updates...
	I0610 04:21:40.364097   15657 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:40.364103   15657 status.go:255] checking status of ha-459000 ...
	I0610 04:21:40.364329   15657 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:21:40.364332   15657 status.go:343] host is not running, skipping remaining checks
	I0610 04:21:40.364334   15657 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr": ha-459000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr": ha-459000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr": ha-459000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr": ha-459000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (30.228667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
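
Exit status 85 (GUEST_NODE_RETRIEVE) is the expected consequence of the earlier provisioning failure: the profile's Nodes list contains only the single unnamed primary node, so "node stop m02" has nothing to look up. A sketch over the Nodes array quoted in the profile JSON above (illustrative only):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Nodes array copied from the profile JSON earlier in this report:
		// one unnamed control-plane node, so a lookup for "m02" must fail.
		raw := []byte(`[{"Name":"","IP":"","Port":8443,"KubernetesVersion":"v1.30.1","ContainerRuntime":"docker","ControlPlane":true,"Worker":true}]`)
		var nodes []struct{ Name string }
		if err := json.Unmarshal(raw, &nodes); err != nil {
			panic(err)
		}
		for _, n := range nodes {
			if n.Name == "m02" {
				fmt.Println("found m02")
				return
			}
		}
		fmt.Println("Could not find node m02") // matches the GUEST_NODE_RETRIEVE message above
	}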

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-459000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (29.687708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
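Note: the expected "Degraded" value is the computed Status field of the profile JSON dumped above. Assuming jq is installed (it is not part of minikube), the field can be pulled out directly instead of eyeballing the blob:

    # extract the computed profile status for ha-459000 (sketch, requires jq)
    out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | select(.Name == "ha-459000") | .Status'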
TestMultiControlPlane/serial/RestartSecondaryNode (36.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 node start m02 -v=7 --alsologtostderr: exit status 85 (49.573958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:40.525872   15667 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:40.526503   15667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.526507   15667 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:40.526509   15667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.526717   15667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:40.526934   15667 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:40.527147   15667 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:40.531031   15667 out.go:177] 
	W0610 04:21:40.535083   15667 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0610 04:21:40.535087   15667 out.go:239] * 
	* 
	W0610 04:21:40.537307   15667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:21:40.541953   15667 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0610 04:21:40.525872   15667 out.go:291] Setting OutFile to fd 1 ...
I0610 04:21:40.526503   15667 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:21:40.526507   15667 out.go:304] Setting ErrFile to fd 2...
I0610 04:21:40.526509   15667 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:21:40.526717   15667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:21:40.526934   15667 mustload.go:65] Loading cluster: ha-459000
I0610 04:21:40.527147   15667 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:21:40.531031   15667 out.go:177] 
W0610 04:21:40.535083   15667 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0610 04:21:40.535087   15667 out.go:239] * 
* 
W0610 04:21:40.537307   15667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 04:21:40.541953   15667 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-459000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (29.864542ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:40.575205   15669 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:40.575342   15669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.575347   15669 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:40.575350   15669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:40.575472   15669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:40.575603   15669 out.go:298] Setting JSON to false
	I0610 04:21:40.575618   15669 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:40.575660   15669 notify.go:220] Checking for updates...
	I0610 04:21:40.575803   15669 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:40.575814   15669 status.go:255] checking status of ha-459000 ...
	I0610 04:21:40.576003   15669 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:21:40.576006   15669 status.go:343] host is not running, skipping remaining checks
	I0610 04:21:40.576008   15669 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (73.262042ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:41.894656   15671 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:41.894844   15671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:41.894848   15671 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:41.894850   15671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:41.895013   15671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:41.895166   15671 out.go:298] Setting JSON to false
	I0610 04:21:41.895179   15671 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:41.895207   15671 notify.go:220] Checking for updates...
	I0610 04:21:41.895465   15671 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:41.895472   15671 status.go:255] checking status of ha-459000 ...
	I0610 04:21:41.895776   15671 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:21:41.895781   15671 status.go:343] host is not running, skipping remaining checks
	I0610 04:21:41.895784   15671 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (76.285042ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:44.019882   15673 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:44.020093   15673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:44.020097   15673 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:44.020101   15673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:44.020285   15673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:44.020456   15673 out.go:298] Setting JSON to false
	I0610 04:21:44.020472   15673 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:44.020519   15673 notify.go:220] Checking for updates...
	I0610 04:21:44.020766   15673 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:44.020775   15673 status.go:255] checking status of ha-459000 ...
	I0610 04:21:44.021060   15673 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:21:44.021065   15673 status.go:343] host is not running, skipping remaining checks
	I0610 04:21:44.021068   15673 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (74.747584ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:46.312993   15675 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:46.313196   15675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:46.313200   15675 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:46.313203   15675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:46.313379   15675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:46.313544   15675 out.go:298] Setting JSON to false
	I0610 04:21:46.313557   15675 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:46.313598   15675 notify.go:220] Checking for updates...
	I0610 04:21:46.313816   15675 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:46.313824   15675 status.go:255] checking status of ha-459000 ...
	I0610 04:21:46.314112   15675 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:21:46.314117   15675 status.go:343] host is not running, skipping remaining checks
	I0610 04:21:46.314120   15675 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (73.689667ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:49.350742   15682 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:49.350898   15682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:49.350903   15682 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:49.350906   15682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:49.351101   15682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:49.351257   15682 out.go:298] Setting JSON to false
	I0610 04:21:49.351269   15682 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:49.351313   15682 notify.go:220] Checking for updates...
	I0610 04:21:49.351516   15682 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:49.351524   15682 status.go:255] checking status of ha-459000 ...
	I0610 04:21:49.351795   15682 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:21:49.351800   15682 status.go:343] host is not running, skipping remaining checks
	I0610 04:21:49.351803   15682 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (74.376542ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:21:53.491290   15684 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:21:53.491522   15684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:53.491526   15684 out.go:304] Setting ErrFile to fd 2...
	I0610 04:21:53.491529   15684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:21:53.491699   15684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:21:53.491853   15684 out.go:298] Setting JSON to false
	I0610 04:21:53.491866   15684 mustload.go:65] Loading cluster: ha-459000
	I0610 04:21:53.491904   15684 notify.go:220] Checking for updates...
	I0610 04:21:53.492122   15684 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:21:53.492130   15684 status.go:255] checking status of ha-459000 ...
	I0610 04:21:53.492418   15684 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:21:53.492422   15684 status.go:343] host is not running, skipping remaining checks
	I0610 04:21:53.492425   15684 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (77.530208ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:22:01.253433   15686 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:22:01.253631   15686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:01.253635   15686 out.go:304] Setting ErrFile to fd 2...
	I0610 04:22:01.253638   15686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:01.253808   15686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:22:01.253952   15686 out.go:298] Setting JSON to false
	I0610 04:22:01.253964   15686 mustload.go:65] Loading cluster: ha-459000
	I0610 04:22:01.253997   15686 notify.go:220] Checking for updates...
	I0610 04:22:01.254200   15686 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:22:01.254208   15686 status.go:255] checking status of ha-459000 ...
	I0610 04:22:01.254507   15686 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:22:01.254512   15686 status.go:343] host is not running, skipping remaining checks
	I0610 04:22:01.254515   15686 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (73.872458ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:22:17.305815   15694 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:22:17.306036   15694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:17.306040   15694 out.go:304] Setting ErrFile to fd 2...
	I0610 04:22:17.306044   15694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:17.306219   15694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:22:17.306361   15694 out.go:298] Setting JSON to false
	I0610 04:22:17.306374   15694 mustload.go:65] Loading cluster: ha-459000
	I0610 04:22:17.306411   15694 notify.go:220] Checking for updates...
	I0610 04:22:17.306650   15694 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:22:17.306661   15694 status.go:255] checking status of ha-459000 ...
	I0610 04:22:17.306940   15694 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:22:17.306945   15694 status.go:343] host is not running, skipping remaining checks
	I0610 04:22:17.306948   15694 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (33.737709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (36.84s)
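Note: the root cause is the GUEST_NODE_RETRIEVE error above ("Could not find node m02"): the profile never gained a second node, so there is nothing to restart and every follow-up status call keeps exiting 7. The node inventory can be confirmed up front with the same command the next subtest runs:

    out/minikube-darwin-arm64 node list -p ha-459000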
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-459000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-459000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (29.966292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
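Note: both assertions fail from the same state: the profile JSON records a single entry in Config.Nodes and a "Stopped" status, so neither the expected 4-node count nor the "HAppy" status can match. Assuming jq again, the recorded node count can be read directly:

    # count nodes recorded in the profile config (sketch, requires jq)
    out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | select(.Name == "ha-459000") | .Config.Nodes | length'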
TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-459000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-459000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-459000 -v=7 --alsologtostderr: (2.879146375s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-459000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-459000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.232109458s)

                                                
                                                
-- stdout --
	* [ha-459000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-459000" primary control-plane node in "ha-459000" cluster
	* Restarting existing qemu2 VM for "ha-459000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-459000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:22:20.416357   15724 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:22:20.416502   15724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:20.416507   15724 out.go:304] Setting ErrFile to fd 2...
	I0610 04:22:20.416510   15724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:20.416679   15724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:22:20.417889   15724 out.go:298] Setting JSON to false
	I0610 04:22:20.437264   15724 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8511,"bootTime":1718010029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:22:20.437327   15724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:22:20.441914   15724 out.go:177] * [ha-459000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:22:20.449828   15724 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:22:20.453835   15724 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:22:20.449875   15724 notify.go:220] Checking for updates...
	I0610 04:22:20.456868   15724 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:22:20.459765   15724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:22:20.462822   15724 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:22:20.465879   15724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:22:20.469172   15724 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:22:20.469239   15724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:22:20.473809   15724 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:22:20.480877   15724 start.go:297] selected driver: qemu2
	I0610 04:22:20.480884   15724 start.go:901] validating driver "qemu2" against &{Name:ha-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:22:20.480976   15724 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:22:20.483430   15724 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:22:20.483474   15724 cni.go:84] Creating CNI manager for ""
	I0610 04:22:20.483480   15724 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 04:22:20.483535   15724 start.go:340] cluster config:
	{Name:ha-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:22:20.488225   15724 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:22:20.495846   15724 out.go:177] * Starting "ha-459000" primary control-plane node in "ha-459000" cluster
	I0610 04:22:20.499819   15724 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:22:20.499835   15724 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:22:20.499846   15724 cache.go:56] Caching tarball of preloaded images
	I0610 04:22:20.499915   15724 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:22:20.499921   15724 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:22:20.499990   15724 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/ha-459000/config.json ...
	I0610 04:22:20.500457   15724 start.go:360] acquireMachinesLock for ha-459000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:22:20.500494   15724 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "ha-459000"
	I0610 04:22:20.500503   15724 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:22:20.500510   15724 fix.go:54] fixHost starting: 
	I0610 04:22:20.500632   15724 fix.go:112] recreateIfNeeded on ha-459000: state=Stopped err=<nil>
	W0610 04:22:20.500641   15724 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:22:20.503915   15724 out.go:177] * Restarting existing qemu2 VM for "ha-459000" ...
	I0610 04:22:20.511860   15724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:cf:dc:71:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:22:20.514056   15724 main.go:141] libmachine: STDOUT: 
	I0610 04:22:20.514076   15724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:22:20.514107   15724 fix.go:56] duration metric: took 13.595042ms for fixHost
	I0610 04:22:20.514112   15724 start.go:83] releasing machines lock for "ha-459000", held for 13.6135ms
	W0610 04:22:20.514119   15724 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:22:20.514156   15724 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:22:20.514161   15724 start.go:728] Will try again in 5 seconds ...
	I0610 04:22:25.516427   15724 start.go:360] acquireMachinesLock for ha-459000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:22:25.516871   15724 start.go:364] duration metric: took 316.584µs to acquireMachinesLock for "ha-459000"
	I0610 04:22:25.517002   15724 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:22:25.517026   15724 fix.go:54] fixHost starting: 
	I0610 04:22:25.517747   15724 fix.go:112] recreateIfNeeded on ha-459000: state=Stopped err=<nil>
	W0610 04:22:25.517780   15724 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:22:25.529710   15724 out.go:177] * Restarting existing qemu2 VM for "ha-459000" ...
	I0610 04:22:25.538431   15724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:cf:dc:71:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:22:25.548359   15724 main.go:141] libmachine: STDOUT: 
	I0610 04:22:25.548424   15724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:22:25.548525   15724 fix.go:56] duration metric: took 31.504459ms for fixHost
	I0610 04:22:25.548544   15724 start.go:83] releasing machines lock for "ha-459000", held for 31.652875ms
	W0610 04:22:25.548690   15724 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-459000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-459000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:22:25.556287   15724 out.go:177] 
	W0610 04:22:25.560053   15724 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:22:25.560101   15724 out.go:239] * 
	* 
	W0610 04:22:25.562457   15724 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:22:25.570184   15724 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-459000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-459000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (32.672417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.24s)
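Note: this restart fails before Kubernetes is involved: both qemu2 start attempts die with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning nothing is listening on the socket_vmnet socket on the host. A rough pre-flight probe, assuming the default paths shown in the log (the "true" probe is an assumption rather than a documented health check; socket_vmnet_client connects to the socket and then execs the given command):

    # is the unix socket present at all?
    ls -l /var/run/socket_vmnet
    # can we connect? this fails with the same "Connection refused" if the daemon is down
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true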
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.745625ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-459000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-459000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:22:25.714081   15736 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:22:25.714504   15736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:25.714508   15736 out.go:304] Setting ErrFile to fd 2...
	I0610 04:22:25.714510   15736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:25.714660   15736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:22:25.714873   15736 mustload.go:65] Loading cluster: ha-459000
	I0610 04:22:25.715057   15736 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:22:25.718681   15736 out.go:177] * The control-plane node ha-459000 host is not running: state=Stopped
	I0610 04:22:25.722573   15736 out.go:177]   To start a cluster, run: "minikube start -p ha-459000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-459000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (29.906709ms)

                                                
                                                
-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:22:25.755655   15738 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:22:25.755805   15738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:25.755808   15738 out.go:304] Setting ErrFile to fd 2...
	I0610 04:22:25.755810   15738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:25.755942   15738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:22:25.756057   15738 out.go:298] Setting JSON to false
	I0610 04:22:25.756076   15738 mustload.go:65] Loading cluster: ha-459000
	I0610 04:22:25.756118   15738 notify.go:220] Checking for updates...
	I0610 04:22:25.756264   15738 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:22:25.756271   15738 status.go:255] checking status of ha-459000 ...
	I0610 04:22:25.756487   15738 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:22:25.756490   15738 status.go:343] host is not running, skipping remaining checks
	I0610 04:22:25.756493   15738 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (30.030333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-459000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (29.983208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
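Note on this failure mode (it recurs below in DegradedAfterClusterRestart and HAppyAfterSecondaryNodeAdd): the assertion hinges on the Status field of `profile list --output json`, which reports "Stopped" because the VM never started. A minimal Go sketch of reading that field, assuming only the JSON shape visible in the failure message above (a top-level "valid" array with Name and Status per profile); it is an illustration, not part of the test suite:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the two fields used here; the large config blob is ignored.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects "Degraded" here; with the VM down it reports "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}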

TestMultiControlPlane/serial/StopCluster (2.22s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-459000 stop -v=7 --alsologtostderr: (2.121073167s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr: exit status 7 (69.093584ms)

-- stdout --
	ha-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:22:28.078177   15762 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:22:28.078400   15762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:28.078404   15762 out.go:304] Setting ErrFile to fd 2...
	I0610 04:22:28.078407   15762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:28.078574   15762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:22:28.078732   15762 out.go:298] Setting JSON to false
	I0610 04:22:28.078745   15762 mustload.go:65] Loading cluster: ha-459000
	I0610 04:22:28.078796   15762 notify.go:220] Checking for updates...
	I0610 04:22:28.079049   15762 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:22:28.079058   15762 status.go:255] checking status of ha-459000 ...
	I0610 04:22:28.079327   15762 status.go:330] ha-459000 host status = "Stopped" (err=<nil>)
	I0610 04:22:28.079332   15762 status.go:343] host is not running, skipping remaining checks
	I0610 04:22:28.079335   15762 status.go:257] ha-459000 status: &{Name:ha-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr": ha-459000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr": ha-459000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-459000 status -v=7 --alsologtostderr": ha-459000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (32.615542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.22s)
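The three assertions above count node blocks in the plain-text status output: two "type: Control Plane" entries, three stopped kubelets, two stopped apiservers. Because only the primary node exists after the earlier failures, every count comes up short. A sketch of the counting idea against the exact status text captured above (the real assertions live in ha_test.go; this only illustrates why each check fails):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Verbatim status output from the failure above: a single stopped node.
	status := `ha-459000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	fmt.Println(strings.Count(status, "type: Control Plane")) // 1, test expects 2
	fmt.Println(strings.Count(status, "kubelet: Stopped"))    // 1, test expects 3
	fmt.Println(strings.Count(status, "apiserver: Stopped"))  // 1, test expects 2
}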

TestMultiControlPlane/serial/RestartCluster (5.24s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-459000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-459000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.179682625s)

-- stdout --
	* [ha-459000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-459000" primary control-plane node in "ha-459000" cluster
	* Restarting existing qemu2 VM for "ha-459000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-459000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:22:28.140715   15766 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:22:28.140851   15766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:28.140854   15766 out.go:304] Setting ErrFile to fd 2...
	I0610 04:22:28.140856   15766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:28.140976   15766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:22:28.142207   15766 out.go:298] Setting JSON to false
	I0610 04:22:28.158654   15766 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8519,"bootTime":1718010029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:22:28.158750   15766 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:22:28.163280   15766 out.go:177] * [ha-459000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:22:28.170233   15766 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:22:28.170294   15766 notify.go:220] Checking for updates...
	I0610 04:22:28.174256   15766 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:22:28.178048   15766 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:22:28.181165   15766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:22:28.184271   15766 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:22:28.187239   15766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:22:28.190566   15766 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:22:28.190844   15766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:22:28.195218   15766 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:22:28.202193   15766 start.go:297] selected driver: qemu2
	I0610 04:22:28.202203   15766 start.go:901] validating driver "qemu2" against &{Name:ha-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.1 ClusterName:ha-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:22:28.202276   15766 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:22:28.204644   15766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:22:28.204683   15766 cni.go:84] Creating CNI manager for ""
	I0610 04:22:28.204688   15766 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 04:22:28.204737   15766 start.go:340] cluster config:
	{Name:ha-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-459000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:22:28.209240   15766 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:22:28.217216   15766 out.go:177] * Starting "ha-459000" primary control-plane node in "ha-459000" cluster
	I0610 04:22:28.221296   15766 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:22:28.221311   15766 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:22:28.221320   15766 cache.go:56] Caching tarball of preloaded images
	I0610 04:22:28.221391   15766 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:22:28.221397   15766 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:22:28.221467   15766 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/ha-459000/config.json ...
	I0610 04:22:28.221930   15766 start.go:360] acquireMachinesLock for ha-459000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:22:28.221963   15766 start.go:364] duration metric: took 25.792µs to acquireMachinesLock for "ha-459000"
	I0610 04:22:28.221972   15766 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:22:28.221978   15766 fix.go:54] fixHost starting: 
	I0610 04:22:28.222111   15766 fix.go:112] recreateIfNeeded on ha-459000: state=Stopped err=<nil>
	W0610 04:22:28.222121   15766 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:22:28.230164   15766 out.go:177] * Restarting existing qemu2 VM for "ha-459000" ...
	I0610 04:22:28.234243   15766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:cf:dc:71:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:22:28.236575   15766 main.go:141] libmachine: STDOUT: 
	I0610 04:22:28.236598   15766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:22:28.236639   15766 fix.go:56] duration metric: took 14.659625ms for fixHost
	I0610 04:22:28.236644   15766 start.go:83] releasing machines lock for "ha-459000", held for 14.676209ms
	W0610 04:22:28.236652   15766 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:22:28.236690   15766 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:22:28.236695   15766 start.go:728] Will try again in 5 seconds ...
	I0610 04:22:33.238861   15766 start.go:360] acquireMachinesLock for ha-459000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:22:33.239323   15766 start.go:364] duration metric: took 362µs to acquireMachinesLock for "ha-459000"
	I0610 04:22:33.239445   15766 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:22:33.239463   15766 fix.go:54] fixHost starting: 
	I0610 04:22:33.240168   15766 fix.go:112] recreateIfNeeded on ha-459000: state=Stopped err=<nil>
	W0610 04:22:33.240192   15766 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:22:33.244689   15766 out.go:177] * Restarting existing qemu2 VM for "ha-459000" ...
	I0610 04:22:33.249819   15766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:cf:dc:71:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/ha-459000/disk.qcow2
	I0610 04:22:33.257889   15766 main.go:141] libmachine: STDOUT: 
	I0610 04:22:33.257952   15766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:22:33.258023   15766 fix.go:56] duration metric: took 18.561042ms for fixHost
	I0610 04:22:33.258040   15766 start.go:83] releasing machines lock for "ha-459000", held for 18.693959ms
	W0610 04:22:33.258236   15766 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-459000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-459000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:22:33.265594   15766 out.go:177] 
	W0610 04:22:33.269633   15766 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:22:33.269655   15766 out.go:239] * 
	* 
	W0610 04:22:33.271543   15766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:22:33.279589   15766 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-459000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (63.615666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.24s)
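Every qemu2 start in this run dies at the same point: as the stderr above shows, libmachine execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot connect to the unix socket at /var/run/socket_vmnet, so no VM ever boots. That suggests the socket_vmnet daemon on the build agent is not running (or not serving that socket); per the minikube qemu2 driver docs it is typically run as a root service, e.g. via the Homebrew socket_vmnet formula. A stand-alone probe (a hypothetical diagnostic, not part of the suite) that reproduces the refused connection:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client uses.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With the daemon down, this prints the same "connection refused" seen above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}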

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-459000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (30.243291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-459000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-459000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.382625ms)

-- stdout --
	* The control-plane node ha-459000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-459000"

-- /stdout --
** stderr ** 
	I0610 04:22:33.488949   15785 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:22:33.489109   15785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:33.489112   15785 out.go:304] Setting ErrFile to fd 2...
	I0610 04:22:33.489115   15785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:22:33.489260   15785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:22:33.489502   15785 mustload.go:65] Loading cluster: ha-459000
	I0610 04:22:33.489672   15785 config.go:182] Loaded profile config "ha-459000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:22:33.493970   15785 out.go:177] * The control-plane node ha-459000 host is not running: state=Stopped
	I0610 04:22:33.498041   15785 out.go:177]   To start a cluster, run: "minikube start -p ha-459000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-459000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (30.278541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-459000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-459000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-459000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-459000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-459000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-459000 -n ha-459000: exit status 7 (30.012041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-459000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

TestImageBuild/serial/Setup (10s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-407000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-407000 --driver=qemu2 : exit status 80 (9.929752458s)

-- stdout --
	* [image-407000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-407000" primary control-plane node in "image-407000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-407000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-407000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-407000 -n image-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-407000 -n image-407000: exit status 7 (68.120541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.00s)

TestJSONOutput/start/Command (9.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-068000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-068000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.91202375s)

-- stdout --
	{"specversion":"1.0","id":"cb57e9b9-4bbe-4ada-922a-6994a3a5d80e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-068000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e1877bb-eac0-4f67-85d6-3dbb74cc7e00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19052"}}
	{"specversion":"1.0","id":"cf7775d7-8bd7-4367-b444-7e28b8a569ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig"}}
	{"specversion":"1.0","id":"ae0cab8f-7cd7-4916-b9b1-2aab59533f63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"34af0764-cf36-4c94-80cd-5672e949b602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"69e0c5f6-34be-454d-a9f5-ba513ff56e26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube"}}
	{"specversion":"1.0","id":"894f78ae-4212-4637-94dd-9d8ec5bf3c71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dd82e3cd-691a-432a-a282-bad6445b759d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"28cc8a13-6dbf-47f3-89ea-bacd19bdcb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"82c65617-b826-410f-a4c0-733ef4373d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-068000\" primary control-plane node in \"json-output-068000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"80987fa5-526f-444c-8551-94f4d7328cd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f9465ee0-2e7e-4c07-ad08-73a4a2e36b6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-068000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"597594fa-af2d-406b-9718-ab70a432245b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0525f51b-4054-4472-9403-d64dec48c8b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"e6b64d77-b375-4ace-b9b7-a373d736ea9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-068000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f8c9dcc4-508c-47b4-85f5-d33662b2aaaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"a7f24224-3da9-4b36-8578-1c93a30384c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-068000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.91s)
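This failure is a knock-on effect of the same socket error: the "unable to marshal output" and "converting to cloud events" messages imply the test decodes stdout line by line as JSON cloud events, and the raw "OUTPUT:"/"ERROR:" lines the driver leaks into stdout are not JSON, so decoding stops at the first stray byte. A sketch of that decoding step under those assumptions (illustration only, not the test's actual code):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
		`OUTPUT: `, // raw driver output leaking into the JSON stream
	}
	for _, l := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			continue
		}
		fmt.Println("ok:", ev["type"])
	}
}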

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-068000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-068000 --output=json --user=testUser: exit status 83 (78.272459ms)

-- stdout --
	{"specversion":"1.0","id":"cd42d87b-f37b-4c49-af2f-d72b38d9e237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-068000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"66ee0c6b-83b2-49fe-b76e-2389ffd5ed96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-068000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-068000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-068000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-068000 --output=json --user=testUser: exit status 83 (46.206292ms)

-- stdout --
	* The control-plane node json-output-068000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-068000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-068000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-068000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
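
Both CloudEvents decode failures above ("invalid character 'O'" in start, "invalid character '*'" in unpause) share one mechanism: the test decodes stdout line by line, expecting every line to be a JSON event, so the first byte of any plain-text line that leaks through --output=json aborts the decode. A minimal sketch of that per-line decode (illustrative, not the test's actual code), fed the two offending lines seen above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// One well-formed CloudEvent followed by the two plain-text lines that
	// leaked into --output=json mode in the failures above.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
OUTPUT: 
* The control-plane node json-output-068000 host is not running: state=Stopped`

	for _, line := range strings.Split(stdout, "\n") {
		var ev map[string]any
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// Prints "invalid character 'O' ..." and "invalid character '*' ...",
			// matching json_output_test.go:70 above.
			fmt.Println("converting to cloud events:", err)
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
}
```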

TestMinikubeProfile (10.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-932000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-932000 --driver=qemu2 : exit status 80 (10.017585125s)

-- stdout --
	* [first-932000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-932000" primary control-plane node in "first-932000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-932000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-932000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-932000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-06-10 04:23:07.501135 -0700 PDT m=+431.198206751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-933000 -n second-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-933000 -n second-933000: exit status 85 (78.20775ms)

-- stdout --
	* Profile "second-933000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-933000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-933000" host is not running, skipping log retrieval (state="* Profile \"second-933000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-933000\"")
helpers_test.go:175: Cleaning up "second-933000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-933000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-06-10 04:23:07.808179 -0700 PDT m=+431.505248376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-932000 -n first-932000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-932000 -n first-932000: exit status 7 (29.810458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-932000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-932000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-932000
--- FAIL: TestMinikubeProfile (10.46s)
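
Every start failure in this run reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so each VM creation dies with "Connection refused" before QEMU launches. A hypothetical standalone probe (not part of the test suite) that reproduces the condition by dialing the unix socket directly:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that the qemu2 driver hands to socket_vmnet_client.
	// With the socket_vmnet daemon down, this fails the same way the logs do:
	// Failed to connect to "/var/run/socket_vmnet": Connection refused.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```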

TestMountStart/serial/StartWithMountFirst (10.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-217000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-217000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.085292083s)

-- stdout --
	* [mount-start-1-217000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-217000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-217000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-217000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-217000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-217000 -n mount-start-1-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-217000 -n mount-start-1-217000: exit status 7 (69.498334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.16s)

TestMultiNode/serial/FreshStart2Nodes (10.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-766000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-766000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.93976s)

-- stdout --
	* [multinode-766000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-766000" primary control-plane node in "multinode-766000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-766000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:23:18.452364   15961 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:23:18.452498   15961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:23:18.452504   15961 out.go:304] Setting ErrFile to fd 2...
	I0610 04:23:18.452506   15961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:23:18.452629   15961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:23:18.453695   15961 out.go:298] Setting JSON to false
	I0610 04:23:18.469676   15961 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8569,"bootTime":1718010029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:23:18.469737   15961 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:23:18.475853   15961 out.go:177] * [multinode-766000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:23:18.483914   15961 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:23:18.488836   15961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:23:18.483968   15961 notify.go:220] Checking for updates...
	I0610 04:23:18.495794   15961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:23:18.498818   15961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:23:18.501831   15961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:23:18.504861   15961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:23:18.507952   15961 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:23:18.511804   15961 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:23:18.518766   15961 start.go:297] selected driver: qemu2
	I0610 04:23:18.518770   15961 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:23:18.518776   15961 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:23:18.520878   15961 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:23:18.524809   15961 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:23:18.527957   15961 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:23:18.527984   15961 cni.go:84] Creating CNI manager for ""
	I0610 04:23:18.527989   15961 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 04:23:18.527993   15961 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 04:23:18.528026   15961 start.go:340] cluster config:
	{Name:multinode-766000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:23:18.532732   15961 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:23:18.540790   15961 out.go:177] * Starting "multinode-766000" primary control-plane node in "multinode-766000" cluster
	I0610 04:23:18.544813   15961 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:23:18.544846   15961 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:23:18.544858   15961 cache.go:56] Caching tarball of preloaded images
	I0610 04:23:18.544934   15961 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:23:18.544940   15961 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:23:18.545183   15961 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/multinode-766000/config.json ...
	I0610 04:23:18.545195   15961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/multinode-766000/config.json: {Name:mk9506e492f6127ba4bee730bf4c7a34de88bcad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:23:18.545443   15961 start.go:360] acquireMachinesLock for multinode-766000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:23:18.545480   15961 start.go:364] duration metric: took 30.708µs to acquireMachinesLock for "multinode-766000"
	I0610 04:23:18.545491   15961 start.go:93] Provisioning new machine with config: &{Name:multinode-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:23:18.545523   15961 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:23:18.551835   15961 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:23:18.570728   15961 start.go:159] libmachine.API.Create for "multinode-766000" (driver="qemu2")
	I0610 04:23:18.570754   15961 client.go:168] LocalClient.Create starting
	I0610 04:23:18.570816   15961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:23:18.570853   15961 main.go:141] libmachine: Decoding PEM data...
	I0610 04:23:18.570862   15961 main.go:141] libmachine: Parsing certificate...
	I0610 04:23:18.570914   15961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:23:18.570938   15961 main.go:141] libmachine: Decoding PEM data...
	I0610 04:23:18.570950   15961 main.go:141] libmachine: Parsing certificate...
	I0610 04:23:18.571389   15961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:23:18.717068   15961 main.go:141] libmachine: Creating SSH key...
	I0610 04:23:18.871802   15961 main.go:141] libmachine: Creating Disk image...
	I0610 04:23:18.871808   15961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:23:18.872006   15961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:23:18.884727   15961 main.go:141] libmachine: STDOUT: 
	I0610 04:23:18.884749   15961 main.go:141] libmachine: STDERR: 
	I0610 04:23:18.884792   15961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2 +20000M
	I0610 04:23:18.895723   15961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:23:18.895747   15961 main.go:141] libmachine: STDERR: 
	I0610 04:23:18.895760   15961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:23:18.895765   15961 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:23:18.895794   15961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:75:75:d4:32:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:23:18.897692   15961 main.go:141] libmachine: STDOUT: 
	I0610 04:23:18.897707   15961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:23:18.897728   15961 client.go:171] duration metric: took 326.96475ms to LocalClient.Create
	I0610 04:23:20.899998   15961 start.go:128] duration metric: took 2.354430333s to createHost
	I0610 04:23:20.900081   15961 start.go:83] releasing machines lock for "multinode-766000", held for 2.354574417s
	W0610 04:23:20.900217   15961 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:23:20.911449   15961 out.go:177] * Deleting "multinode-766000" in qemu2 ...
	W0610 04:23:20.947708   15961 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:23:20.947738   15961 start.go:728] Will try again in 5 seconds ...
	I0610 04:23:25.948536   15961 start.go:360] acquireMachinesLock for multinode-766000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:23:25.949150   15961 start.go:364] duration metric: took 504.625µs to acquireMachinesLock for "multinode-766000"
	I0610 04:23:25.949306   15961 start.go:93] Provisioning new machine with config: &{Name:multinode-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:23:25.949590   15961 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:23:25.965329   15961 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:23:26.015750   15961 start.go:159] libmachine.API.Create for "multinode-766000" (driver="qemu2")
	I0610 04:23:26.015796   15961 client.go:168] LocalClient.Create starting
	I0610 04:23:26.015913   15961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:23:26.015994   15961 main.go:141] libmachine: Decoding PEM data...
	I0610 04:23:26.016010   15961 main.go:141] libmachine: Parsing certificate...
	I0610 04:23:26.016078   15961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:23:26.016122   15961 main.go:141] libmachine: Decoding PEM data...
	I0610 04:23:26.016135   15961 main.go:141] libmachine: Parsing certificate...
	I0610 04:23:26.016646   15961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:23:26.177019   15961 main.go:141] libmachine: Creating SSH key...
	I0610 04:23:26.293080   15961 main.go:141] libmachine: Creating Disk image...
	I0610 04:23:26.293086   15961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:23:26.293260   15961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:23:26.305999   15961 main.go:141] libmachine: STDOUT: 
	I0610 04:23:26.306019   15961 main.go:141] libmachine: STDERR: 
	I0610 04:23:26.306073   15961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2 +20000M
	I0610 04:23:26.316954   15961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:23:26.316979   15961 main.go:141] libmachine: STDERR: 
	I0610 04:23:26.316993   15961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:23:26.316997   15961 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:23:26.317026   15961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:54:2b:02:dc:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:23:26.318767   15961 main.go:141] libmachine: STDOUT: 
	I0610 04:23:26.318781   15961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:23:26.318795   15961 client.go:171] duration metric: took 302.990209ms to LocalClient.Create
	I0610 04:23:28.320965   15961 start.go:128] duration metric: took 2.371329s to createHost
	I0610 04:23:28.321019   15961 start.go:83] releasing machines lock for "multinode-766000", held for 2.371821166s
	W0610 04:23:28.321428   15961 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:23:28.333984   15961 out.go:177] 
	W0610 04:23:28.338066   15961 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:23:28.338127   15961 out.go:239] * 
	* 
	W0610 04:23:28.340902   15961 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:23:28.350939   15961 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-766000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (67.191584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.01s)
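
The alsologtostderr trace above isolates the failing step: disk creation via qemu-img succeeds ("STDOUT: Image resized."), and the error appears only when libmachine launches QEMU through socket_vmnet_client, which must connect to the daemon before handing QEMU the network file descriptor. A simplified reconstruction of that launch (argument list abridged from the 04:23:18.895794 log line above; not the driver's actual source):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// socket_vmnet_client connects to the daemon socket first, then runs
	// QEMU with that connection passed as fd 3 (hence -netdev socket,fd=3).
	// If the connect fails, QEMU never starts and the driver sees only the
	// "Connection refused" line captured in STDERR above.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64", "-M", "virt,highmem=off", "-cpu", "host",
		"-m", "2200", "-smp", "2",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3",
		"-daemonize", "disk.qcow2")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("start failed: %v\n%s", err, out)
	}
}
```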

TestMultiNode/serial/DeployApp2Nodes (114.94s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.309583ms)

** stderr ** 
	error: cluster "multinode-766000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- rollout status deployment/busybox: exit status 1 (56.391125ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.156917ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.67025ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.598459ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.380791ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.168083ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.677416ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.085958ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.669167ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.330584ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.27475ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.262917ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.817375ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.677375ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.454167ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.002709ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (30.82575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (114.94s)
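
The 114.94s runtime is almost entirely retry budget: multinode_test.go:505 re-runs the Pod-IP query and treats each "no server found" as possibly temporary, until multinode_test.go:524 gives up. A hedged sketch of that poll-until-deadline shape (the function name and all constants are illustrative, not the test's actual values):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// queryPodIPs stands in for the repeated kubectl jsonpath call; against the
// never-created multinode-766000 cluster it always fails the same way.
func queryPodIPs() (string, error) {
	return "", errors.New(`no server found for cluster "multinode-766000"`)
}

func main() {
	deadline := time.Now().Add(30 * time.Second) // illustrative budget
	for time.Now().Before(deadline) {
		ips, err := queryPodIPs()
		if err == nil {
			fmt.Println("pod IPs:", ips)
			return
		}
		fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
		time.Sleep(5 * time.Second) // illustrative backoff
	}
	fmt.Println("failed to resolve pod IPs: retries exhausted")
}
```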

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.966458ms)

** stderr ** 
	error: no server found for cluster "multinode-766000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (30.514875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-766000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-766000 -v 3 --alsologtostderr: exit status 83 (44.753125ms)

-- stdout --
	* The control-plane node multinode-766000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-766000"

-- /stdout --
** stderr ** 
	I0610 04:25:23.494302   16061 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:23.494471   16061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.494474   16061 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:23.494477   16061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.494624   16061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:23.494863   16061 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:23.495055   16061 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:23.499646   16061 out.go:177] * The control-plane node multinode-766000 host is not running: state=Stopped
	I0610 04:25:23.502536   16061 out.go:177]   To start a cluster, run: "minikube start -p multinode-766000"
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-766000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (29.23225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
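
Every post-mortem block above runs status --format={{.Host}}, which evaluates the format string as a Go text/template against minikube's status fields; that is why each captured stdout is just the single word "Stopped". A minimal illustration of the mechanism (the Status struct here is a stand-in, not minikube's actual type):

```go
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the fields minikube exposes to --format templates.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	// "{{.Host}}" selects one field, so the command prints just "Stopped".
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
}
```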

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-766000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-766000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.371167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-766000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-766000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-766000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (29.905292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
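
Note on the failure mode above: kubectl exits non-zero because the multinode-766000 kubeconfig context was never created (the cluster never started), so its stdout is empty, and decoding an empty byte slice is what produces the follow-on "unexpected end of JSON input". A minimal sketch of that second error, using a hypothetical label-list shape rather than the test's actual types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl printed nothing to stdout, so the test effectively decodes "".
		var labels []map[string]string // hypothetical shape for the node label list
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}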

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-766000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-766000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-766000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"multinode-766000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (29.625125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
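
The assertion at multinode_test.go:166 parses the `profile list --output json` payload and counts the entries under valid[0].Config.Nodes; because the earlier worker adds failed, the config still records only the single control-plane node. A cut-down sketch of that count, with assumed struct shapes (the real config carries many more fields):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors just the fields the check needs; names are assumed.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed version of the payload quoted in the failure above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-766000",
			"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1; the test wants 3
	}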

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status --output json --alsologtostderr: exit status 7 (29.352375ms)

-- stdout --
	{"Name":"multinode-766000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0610 04:25:23.727022   16074 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:23.727151   16074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.727154   16074 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:23.727157   16074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.727282   16074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:23.727388   16074 out.go:298] Setting JSON to true
	I0610 04:25:23.727398   16074 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:23.727450   16074 notify.go:220] Checking for updates...
	I0610 04:25:23.727593   16074 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:23.727600   16074 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:23.727790   16074 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:23.727794   16074 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:23.727796   16074 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-766000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (29.9025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
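
The decode error at multinode_test.go:191 is a shape mismatch rather than corrupt output: with a single node, `minikube status --output json` emits one JSON object (see the stdout above), while the test unmarshals into a []cmd.Status slice sized for a multi-node cluster. A minimal reproduction, with an assumed stand-in for cmd.Status:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is an assumed stand-in for minikube's cmd.Status.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		// One node => status prints a bare object, not an array.
		raw := []byte(`{"Name":"multinode-766000","Host":"Stopped"}`)
		var statuses []Status
		err := json.Unmarshal(raw, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}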

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 node stop m03: exit status 85 (47.309583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-766000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status: exit status 7 (30.457875ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr: exit status 7 (30.66475ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:23.865194   16084 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:23.865330   16084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.865333   16084 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:23.865336   16084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.865479   16084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:23.865604   16084 out.go:298] Setting JSON to false
	I0610 04:25:23.865615   16084 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:23.865911   16084 notify.go:220] Checking for updates...
	I0610 04:25:23.866639   16084 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:23.866655   16084 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:23.866859   16084 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:23.866864   16084 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:23.866866   16084 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr": multinode-766000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (30.697334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (56.15s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.631791ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0610 04:25:23.926620   16088 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:23.926996   16088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.927000   16088 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:23.927002   16088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.927168   16088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:23.927390   16088 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:23.927577   16088 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:23.930679   16088 out.go:177] 
	W0610 04:25:23.934666   16088 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0610 04:25:23.934671   16088 out.go:239] * 
	* 
	W0610 04:25:23.936929   16088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:25:23.940548   16088 out.go:177] 

** /stderr **
multinode_test.go:284: I0610 04:25:23.926620   16088 out.go:291] Setting OutFile to fd 1 ...
I0610 04:25:23.926996   16088 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:25:23.927000   16088 out.go:304] Setting ErrFile to fd 2...
I0610 04:25:23.927002   16088 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 04:25:23.927168   16088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
I0610 04:25:23.927390   16088 mustload.go:65] Loading cluster: multinode-766000
I0610 04:25:23.927577   16088 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 04:25:23.930679   16088 out.go:177] 
W0610 04:25:23.934666   16088 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0610 04:25:23.934671   16088 out.go:239] * 
* 
W0610 04:25:23.936929   16088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 04:25:23.940548   16088 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-766000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (30.264708ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:23.973626   16090 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:23.973980   16090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.973985   16090 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:23.973988   16090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:23.974175   16090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:23.974325   16090 out.go:298] Setting JSON to false
	I0610 04:25:23.974338   16090 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:23.974466   16090 notify.go:220] Checking for updates...
	I0610 04:25:23.974827   16090 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:23.974836   16090 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:23.975017   16090 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:23.975021   16090 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:23.975023   16090 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (74.687625ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:25.369130   16092 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:25.369332   16092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:25.369337   16092 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:25.369340   16092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:25.369532   16092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:25.369703   16092 out.go:298] Setting JSON to false
	I0610 04:25:25.369716   16092 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:25.369756   16092 notify.go:220] Checking for updates...
	I0610 04:25:25.369996   16092 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:25.370009   16092 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:25.370297   16092 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:25.370303   16092 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:25.370306   16092 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (74.410125ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:26.393970   16096 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:26.394154   16096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:26.394158   16096 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:26.394161   16096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:26.394333   16096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:26.394494   16096 out.go:298] Setting JSON to false
	I0610 04:25:26.394507   16096 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:26.394539   16096 notify.go:220] Checking for updates...
	I0610 04:25:26.394784   16096 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:26.394797   16096 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:26.395065   16096 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:26.395070   16096 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:26.395073   16096 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (74.318375ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:29.629260   16098 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:29.629435   16098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:29.629439   16098 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:29.629442   16098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:29.629632   16098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:29.629783   16098 out.go:298] Setting JSON to false
	I0610 04:25:29.629795   16098 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:29.629835   16098 notify.go:220] Checking for updates...
	I0610 04:25:29.630066   16098 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:29.630074   16098 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:29.630349   16098 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:29.630354   16098 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:29.630357   16098 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (74.231333ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:33.779632   16100 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:33.779847   16100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:33.779851   16100 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:33.779854   16100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:33.780044   16100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:33.780197   16100 out.go:298] Setting JSON to false
	I0610 04:25:33.780209   16100 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:33.780245   16100 notify.go:220] Checking for updates...
	I0610 04:25:33.780468   16100 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:33.780477   16100 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:33.780737   16100 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:33.780742   16100 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:33.780745   16100 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (73.078541ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:39.439190   16102 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:39.439404   16102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:39.439407   16102 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:39.439410   16102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:39.439575   16102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:39.439739   16102 out.go:298] Setting JSON to false
	I0610 04:25:39.439753   16102 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:39.439795   16102 notify.go:220] Checking for updates...
	I0610 04:25:39.440042   16102 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:39.440055   16102 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:39.440338   16102 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:39.440342   16102 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:39.440345   16102 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (78.039833ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:45.871275   16106 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:45.871436   16106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:45.871441   16106 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:45.871444   16106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:45.871610   16106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:45.871759   16106 out.go:298] Setting JSON to false
	I0610 04:25:45.871772   16106 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:45.871810   16106 notify.go:220] Checking for updates...
	I0610 04:25:45.872022   16106 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:45.872031   16106 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:45.872296   16106 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:45.872301   16106 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:45.872304   16106 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (73.669792ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:25:54.478720   16110 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:25:54.478923   16110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:54.478930   16110 out.go:304] Setting ErrFile to fd 2...
	I0610 04:25:54.478933   16110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:25:54.479090   16110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:25:54.479245   16110 out.go:298] Setting JSON to false
	I0610 04:25:54.479257   16110 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:25:54.479290   16110 notify.go:220] Checking for updates...
	I0610 04:25:54.479503   16110 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:25:54.479511   16110 status.go:255] checking status of multinode-766000 ...
	I0610 04:25:54.479781   16110 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:25:54.479786   16110 status.go:343] host is not running, skipping remaining checks
	I0610 04:25:54.479789   16110 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr: exit status 7 (73.179625ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:26:20.014082   16119 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:26:20.014317   16119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:20.014321   16119 out.go:304] Setting ErrFile to fd 2...
	I0610 04:26:20.014325   16119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:20.014516   16119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:26:20.014679   16119 out.go:298] Setting JSON to false
	I0610 04:26:20.014693   16119 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:26:20.014735   16119 notify.go:220] Checking for updates...
	I0610 04:26:20.014991   16119 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:26:20.015000   16119 status.go:255] checking status of multinode-766000 ...
	I0610 04:26:20.015274   16119 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:26:20.015279   16119 status.go:343] host is not running, skipping remaining checks
	I0610 04:26:20.015282   16119 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-766000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (33.373958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (56.15s)
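
Nearly all of this test's 56 seconds is the status poll: after `node start` fails, multinode_test.go:290 keeps re-running `minikube status` with growing pauses (the stderr timestamps climb from 04:25:23 to 04:26:20) before giving up on a host that never leaves "Stopped". Roughly this kind of loop, sketched here with a simple doubling backoff rather than the test's actual retry helper:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := time.Second
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("out/minikube-darwin-arm64",
				"-p", "multinode-766000", "status").CombinedOutput()
			if err == nil { // exit status 7 means some component is stopped
				fmt.Printf("cluster is up:\n%s", out)
				return
			}
			time.Sleep(delay)
			delay *= 2 // back off between polls
		}
		fmt.Println("timed out waiting for the host to run")
	}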

TestMultiNode/serial/RestartKeepsNodes (7.42s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-766000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-766000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-766000: (2.060714459s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224279167s)

-- stdout --
	* [multinode-766000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-766000" primary control-plane node in "multinode-766000" cluster
	* Restarting existing qemu2 VM for "multinode-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:26:22.207704   16137 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:26:22.208136   16137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:22.208142   16137 out.go:304] Setting ErrFile to fd 2...
	I0610 04:26:22.208146   16137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:22.208401   16137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:26:22.210120   16137 out.go:298] Setting JSON to false
	I0610 04:26:22.230167   16137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8753,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:26:22.230236   16137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:26:22.235118   16137 out.go:177] * [multinode-766000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:26:22.241116   16137 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:26:22.241148   16137 notify.go:220] Checking for updates...
	I0610 04:26:22.248015   16137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:26:22.251022   16137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:26:22.254043   16137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:26:22.255473   16137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:26:22.258070   16137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:26:22.261428   16137 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:26:22.261484   16137 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:26:22.265861   16137 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:26:22.273059   16137 start.go:297] selected driver: qemu2
	I0610 04:26:22.273065   16137 start.go:901] validating driver "qemu2" against &{Name:multinode-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:26:22.273124   16137 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:26:22.275436   16137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:26:22.275484   16137 cni.go:84] Creating CNI manager for ""
	I0610 04:26:22.275489   16137 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 04:26:22.275540   16137 start.go:340] cluster config:
	{Name:multinode-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:26:22.280217   16137 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:22.287009   16137 out.go:177] * Starting "multinode-766000" primary control-plane node in "multinode-766000" cluster
	I0610 04:26:22.291068   16137 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:26:22.291083   16137 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:26:22.291092   16137 cache.go:56] Caching tarball of preloaded images
	I0610 04:26:22.291160   16137 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:26:22.291167   16137 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:26:22.291234   16137 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/multinode-766000/config.json ...
	I0610 04:26:22.291675   16137 start.go:360] acquireMachinesLock for multinode-766000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:26:22.291709   16137 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "multinode-766000"
	I0610 04:26:22.291718   16137 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:26:22.291724   16137 fix.go:54] fixHost starting: 
	I0610 04:26:22.291858   16137 fix.go:112] recreateIfNeeded on multinode-766000: state=Stopped err=<nil>
	W0610 04:26:22.291866   16137 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:26:22.296074   16137 out.go:177] * Restarting existing qemu2 VM for "multinode-766000" ...
	I0610 04:26:22.304018   16137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:54:2b:02:dc:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:26:22.306143   16137 main.go:141] libmachine: STDOUT: 
	I0610 04:26:22.306170   16137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:26:22.306198   16137 fix.go:56] duration metric: took 14.473834ms for fixHost
	I0610 04:26:22.306203   16137 start.go:83] releasing machines lock for "multinode-766000", held for 14.4895ms
	W0610 04:26:22.306210   16137 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:26:22.306242   16137 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:26:22.306247   16137 start.go:728] Will try again in 5 seconds ...
	I0610 04:26:27.308497   16137 start.go:360] acquireMachinesLock for multinode-766000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:26:27.308879   16137 start.go:364] duration metric: took 281.292µs to acquireMachinesLock for "multinode-766000"
	I0610 04:26:27.308996   16137 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:26:27.309022   16137 fix.go:54] fixHost starting: 
	I0610 04:26:27.309761   16137 fix.go:112] recreateIfNeeded on multinode-766000: state=Stopped err=<nil>
	W0610 04:26:27.309793   16137 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:26:27.314348   16137 out.go:177] * Restarting existing qemu2 VM for "multinode-766000" ...
	I0610 04:26:27.322469   16137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:54:2b:02:dc:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:26:27.331889   16137 main.go:141] libmachine: STDOUT: 
	I0610 04:26:27.331953   16137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:26:27.332075   16137 fix.go:56] duration metric: took 23.055875ms for fixHost
	I0610 04:26:27.332095   16137 start.go:83] releasing machines lock for "multinode-766000", held for 23.195375ms
	W0610 04:26:27.332256   16137 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:26:27.340250   16137 out.go:177] 
	W0610 04:26:27.344271   16137 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:26:27.344314   16137 out.go:239] * 
	* 
	W0610 04:26:27.346476   16137 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:26:27.353294   16137 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-766000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-766000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (32.869583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.42s)
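Every failure in this group reduces to the same line: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the VM stays Stopped. A minimal host-side triage, sketched under the assumption of a standard socket_vmnet install at the paths this log already shows (these commands are illustrative and not part of the harness):

	# Does the socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Same client binary minikube invokes; a healthy daemon execs the given
	# command with the connected socket on fd 3 (hence "-netdev socket,...,fd=3"
	# in the QEMU command above) instead of printing "Failed to connect".
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true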

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 node delete m03: exit status 83 (41.376167ms)

-- stdout --
	* The control-plane node multinode-766000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-766000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-766000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr: exit status 7 (30.288958ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:26:27.536104   16153 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:26:27.536480   16153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:27.536489   16153 out.go:304] Setting ErrFile to fd 2...
	I0610 04:26:27.536492   16153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:27.536694   16153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:26:27.536847   16153 out.go:298] Setting JSON to false
	I0610 04:26:27.536857   16153 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:26:27.536988   16153 notify.go:220] Checking for updates...
	I0610 04:26:27.537230   16153 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:26:27.537244   16153 status.go:255] checking status of multinode-766000 ...
	I0610 04:26:27.537445   16153 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:26:27.537449   16153 status.go:343] host is not running, skipping remaining checks
	I0610 04:26:27.537451   16153 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (29.890625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
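Note the fail-fast cascade: with the control-plane host Stopped, "node delete" refuses to act and exits 83 alongside the "host is not running: state=Stopped" hint, so the whole subtest completes in ~0.1s. The host state can be confirmed with the same status invocation the post-mortem uses:

	out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000
	# prints "Stopped" and exits 7 while the VM is down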

TestMultiNode/serial/StopMultiNode (3.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-766000 stop: (3.43391325s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status: exit status 7 (73.312583ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr: exit status 7 (33.8495ms)

-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 04:26:31.108347   16177 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:26:31.108489   16177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:31.108492   16177 out.go:304] Setting ErrFile to fd 2...
	I0610 04:26:31.108495   16177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:31.108636   16177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:26:31.108768   16177 out.go:298] Setting JSON to false
	I0610 04:26:31.108778   16177 mustload.go:65] Loading cluster: multinode-766000
	I0610 04:26:31.108835   16177 notify.go:220] Checking for updates...
	I0610 04:26:31.108981   16177 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:26:31.108988   16177 status.go:255] checking status of multinode-766000 ...
	I0610 04:26:31.109182   16177 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0610 04:26:31.109185   16177 status.go:343] host is not running, skipping remaining checks
	I0610 04:26:31.109188   16177 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr": multinode-766000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr": multinode-766000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (29.915375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.57s)
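The "incorrect number of stopped hosts/kubelets" assertions fail because only the primary control plane ever existed: the worker nodes were never created, so status prints a single "host: Stopped" block where a multi-node stop should print one per node. The count the assertion is effectively checking can be reproduced with an illustrative one-liner (not part of the harness):

	out/minikube-darwin-arm64 -p multinode-766000 status --alsologtostderr | grep -c "host: Stopped"
	# 1 here; a healthy multi-node cluster would report one match per node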

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184224666s)

-- stdout --
	* [multinode-766000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-766000" primary control-plane node in "multinode-766000" cluster
	* Restarting existing qemu2 VM for "multinode-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:26:31.167890   16181 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:26:31.168124   16181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:31.168131   16181 out.go:304] Setting ErrFile to fd 2...
	I0610 04:26:31.168133   16181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:31.168312   16181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:26:31.169624   16181 out.go:298] Setting JSON to false
	I0610 04:26:31.185829   16181 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8762,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:26:31.185890   16181 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:26:31.190269   16181 out.go:177] * [multinode-766000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:26:31.197928   16181 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:26:31.197976   16181 notify.go:220] Checking for updates...
	I0610 04:26:31.202015   16181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:26:31.204894   16181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:26:31.207967   16181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:26:31.211001   16181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:26:31.213905   16181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:26:31.217226   16181 config.go:182] Loaded profile config "multinode-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:26:31.217501   16181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:26:31.221939   16181 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:26:31.228944   16181 start.go:297] selected driver: qemu2
	I0610 04:26:31.228950   16181 start.go:901] validating driver "qemu2" against &{Name:multinode-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:26:31.229026   16181 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:26:31.231210   16181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:26:31.231254   16181 cni.go:84] Creating CNI manager for ""
	I0610 04:26:31.231260   16181 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 04:26:31.231304   16181 start.go:340] cluster config:
	{Name:multinode-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:26:31.235752   16181 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:31.242921   16181 out.go:177] * Starting "multinode-766000" primary control-plane node in "multinode-766000" cluster
	I0610 04:26:31.246955   16181 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:26:31.246970   16181 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:26:31.246980   16181 cache.go:56] Caching tarball of preloaded images
	I0610 04:26:31.247040   16181 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:26:31.247046   16181 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:26:31.247123   16181 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/multinode-766000/config.json ...
	I0610 04:26:31.247551   16181 start.go:360] acquireMachinesLock for multinode-766000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:26:31.247581   16181 start.go:364] duration metric: took 23.208µs to acquireMachinesLock for "multinode-766000"
	I0610 04:26:31.247589   16181 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:26:31.247621   16181 fix.go:54] fixHost starting: 
	I0610 04:26:31.247740   16181 fix.go:112] recreateIfNeeded on multinode-766000: state=Stopped err=<nil>
	W0610 04:26:31.247748   16181 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:26:31.255958   16181 out.go:177] * Restarting existing qemu2 VM for "multinode-766000" ...
	I0610 04:26:31.260002   16181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:54:2b:02:dc:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:26:31.262145   16181 main.go:141] libmachine: STDOUT: 
	I0610 04:26:31.262166   16181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:26:31.262194   16181 fix.go:56] duration metric: took 14.572875ms for fixHost
	I0610 04:26:31.262200   16181 start.go:83] releasing machines lock for "multinode-766000", held for 14.614416ms
	W0610 04:26:31.262206   16181 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:26:31.262244   16181 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:26:31.262249   16181 start.go:728] Will try again in 5 seconds ...
	I0610 04:26:36.264434   16181 start.go:360] acquireMachinesLock for multinode-766000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:26:36.264823   16181 start.go:364] duration metric: took 307.75µs to acquireMachinesLock for "multinode-766000"
	I0610 04:26:36.264952   16181 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:26:36.264971   16181 fix.go:54] fixHost starting: 
	I0610 04:26:36.265695   16181 fix.go:112] recreateIfNeeded on multinode-766000: state=Stopped err=<nil>
	W0610 04:26:36.265716   16181 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:26:36.270184   16181 out.go:177] * Restarting existing qemu2 VM for "multinode-766000" ...
	I0610 04:26:36.280284   16181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:54:2b:02:dc:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/multinode-766000/disk.qcow2
	I0610 04:26:36.288755   16181 main.go:141] libmachine: STDOUT: 
	I0610 04:26:36.288813   16181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:26:36.288899   16181 fix.go:56] duration metric: took 23.925583ms for fixHost
	I0610 04:26:36.288924   16181 start.go:83] releasing machines lock for "multinode-766000", held for 24.075166ms
	W0610 04:26:36.289176   16181 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:26:36.297173   16181 out.go:177] 
	W0610 04:26:36.301201   16181 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:26:36.301218   16181 out.go:239] * 
	* 
	W0610 04:26:36.302927   16181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:26:36.312153   16181 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (69.486416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
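The 5.26s duration is almost entirely retry backoff, not work: each fixHost attempt fails in roughly 15-25ms, and the driver waits 5 seconds between its two attempts ("Will try again in 5 seconds ..."), so two ~20ms attempts + 5s backoff + ~150ms of CLI startup and profile loading come to the observed 5.18s for the start command. The ~10s failures elsewhere in this report (e.g. TestPreload below) follow the create path instead, which additionally builds a disk image and deletes the half-created machine between the two attempts.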

TestMultiNode/serial/ValidateNameConflict (21.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-766000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-766000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-766000-m01 --driver=qemu2 : exit status 80 (11.038423s)

-- stdout --
	* [multinode-766000-m01] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-766000-m01" primary control-plane node in "multinode-766000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-766000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-766000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-766000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-766000-m02 --driver=qemu2 : exit status 80 (10.372313834s)

-- stdout --
	* [multinode-766000-m02] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-766000-m02" primary control-plane node in "multinode-766000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-766000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-766000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-766000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-766000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-766000: exit status 83 (85.900792ms)

-- stdout --
	* The control-plane node multinode-766000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-766000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-766000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-766000 -n multinode-766000: exit status 7 (30.756959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (21.67s)
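This subtest is the clearest triage signal in the group: two brand-new profiles (multinode-766000-m01 and -m02) hit the identical "Connection refused" during fresh VM creation, so the fault is host-level (no socket_vmnet daemon listening), not stale state in the multinode-766000 profile, and the boxed "minikube delete" advice cannot fix it. Restarting the daemon on the build host would be the actual remedy; a sketch assuming the launchd setup from the socket_vmnet docs (the service label and the Homebrew variant are assumptions, not something this log shows):

	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# or, for a Homebrew-managed install:
	sudo brew services restart socket_vmnet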

TestPreload (10.33s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.162053958s)

-- stdout --
	* [test-preload-558000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-558000" primary control-plane node in "test-preload-558000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:26:58.234149   16245 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:26:58.234293   16245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:58.234296   16245 out.go:304] Setting ErrFile to fd 2...
	I0610 04:26:58.234303   16245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:26:58.234434   16245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:26:58.235478   16245 out.go:298] Setting JSON to false
	I0610 04:26:58.251678   16245 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8789,"bootTime":1718010029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:26:58.251742   16245 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:26:58.256581   16245 out.go:177] * [test-preload-558000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:26:58.265468   16245 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:26:58.265525   16245 notify.go:220] Checking for updates...
	I0610 04:26:58.270973   16245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:26:58.274479   16245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:26:58.278410   16245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:26:58.285480   16245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:26:58.288489   16245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:26:58.291749   16245 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:26:58.291800   16245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:26:58.294443   16245 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:26:58.301519   16245 start.go:297] selected driver: qemu2
	I0610 04:26:58.301525   16245 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:26:58.301530   16245 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:26:58.303747   16245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:26:58.305193   16245 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:26:58.308562   16245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:26:58.308590   16245 cni.go:84] Creating CNI manager for ""
	I0610 04:26:58.308597   16245 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:26:58.308601   16245 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:26:58.308627   16245 start.go:340] cluster config:
	{Name:test-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:26:58.313137   16245 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.320381   16245 out.go:177] * Starting "test-preload-558000" primary control-plane node in "test-preload-558000" cluster
	I0610 04:26:58.324458   16245 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0610 04:26:58.324552   16245 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/test-preload-558000/config.json ...
	I0610 04:26:58.324568   16245 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/test-preload-558000/config.json: {Name:mk75a965190f7f9cb0dde34454185c295a7c56c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:26:58.324603   16245 cache.go:107] acquiring lock: {Name:mk2c43a349319889823e75fa1fc400c571cc7a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.324606   16245 cache.go:107] acquiring lock: {Name:mk33f064af4e3d41101c9b3f5a22dd8ab8835dbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.324603   16245 cache.go:107] acquiring lock: {Name:mk161fc2971941439e2d4aca95c678453187fd03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.324732   16245 cache.go:107] acquiring lock: {Name:mkc4b0c549a066dc25f8cfb1630349f6aa107fc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.324817   16245 cache.go:107] acquiring lock: {Name:mk37b5d5ba9f81d474ecc5068ec4f52b5aad47b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.324825   16245 cache.go:107] acquiring lock: {Name:mk16c74beb145346c763d6b06ee9bf7c47679d7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.324835   16245 cache.go:107] acquiring lock: {Name:mkcd66692a43d86c1c7c286a107e42d0101725fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.324847   16245 cache.go:107] acquiring lock: {Name:mkc8ef720ead01345abd8b28f35e5f0045a24df9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:26:58.324927   16245 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0610 04:26:58.324982   16245 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0610 04:26:58.324997   16245 start.go:360] acquireMachinesLock for test-preload-558000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:26:58.324932   16245 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0610 04:26:58.325084   16245 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 04:26:58.325129   16245 start.go:364] duration metric: took 111.75µs to acquireMachinesLock for "test-preload-558000"
	I0610 04:26:58.325141   16245 start.go:93] Provisioning new machine with config: &{Name:test-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:26:58.325170   16245 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:26:58.325180   16245 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:26:58.325189   16245 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:26:58.328500   16245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:26:58.325215   16245 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0610 04:26:58.325181   16245 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 04:26:58.335103   16245 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0610 04:26:58.335125   16245 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0610 04:26:58.335164   16245 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 04:26:58.335229   16245 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0610 04:26:58.335404   16245 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:26:58.335659   16245 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:26:58.336978   16245 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0610 04:26:58.337298   16245 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 04:26:58.345650   16245 start.go:159] libmachine.API.Create for "test-preload-558000" (driver="qemu2")
	I0610 04:26:58.345673   16245 client.go:168] LocalClient.Create starting
	I0610 04:26:58.345751   16245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:26:58.345779   16245 main.go:141] libmachine: Decoding PEM data...
	I0610 04:26:58.345793   16245 main.go:141] libmachine: Parsing certificate...
	I0610 04:26:58.345838   16245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:26:58.345861   16245 main.go:141] libmachine: Decoding PEM data...
	I0610 04:26:58.345870   16245 main.go:141] libmachine: Parsing certificate...
	I0610 04:26:58.346234   16245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:26:58.498974   16245 main.go:141] libmachine: Creating SSH key...
	I0610 04:26:58.651495   16245 main.go:141] libmachine: Creating Disk image...
	I0610 04:26:58.651516   16245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:26:58.651728   16245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 04:26:58.664342   16245 main.go:141] libmachine: STDOUT: 
	I0610 04:26:58.664374   16245 main.go:141] libmachine: STDERR: 
	I0610 04:26:58.664430   16245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2 +20000M
	I0610 04:26:58.675497   16245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:26:58.675516   16245 main.go:141] libmachine: STDERR: 
	I0610 04:26:58.675527   16245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 04:26:58.675532   16245 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:26:58.675556   16245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:9e:ca:b2:e5:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 04:26:58.677326   16245 main.go:141] libmachine: STDOUT: 
	I0610 04:26:58.677350   16245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:26:58.677371   16245 client.go:171] duration metric: took 331.689917ms to LocalClient.Create
	I0610 04:26:59.183409   16245 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 04:26:59.226626   16245 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0610 04:26:59.249901   16245 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0610 04:26:59.318199   16245 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0610 04:26:59.318243   16245 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 993.587541ms
	I0610 04:26:59.318270   16245 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0610 04:26:59.374543   16245 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0610 04:26:59.377728   16245 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0610 04:26:59.416731   16245 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 04:26:59.416817   16245 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 04:26:59.423721   16245 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0610 04:26:59.528591   16245 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 04:26:59.528702   16245 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 04:27:00.279127   16245 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 04:27:00.279227   16245 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.95460725s
	I0610 04:27:00.279265   16245 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 04:27:00.677652   16245 start.go:128] duration metric: took 2.352432625s to createHost
	I0610 04:27:00.677727   16245 start.go:83] releasing machines lock for "test-preload-558000", held for 2.352572125s
	W0610 04:27:00.677790   16245 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:00.688831   16245 out.go:177] * Deleting "test-preload-558000" in qemu2 ...
	W0610 04:27:00.722223   16245 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:00.722253   16245 start.go:728] Will try again in 5 seconds ...
	I0610 04:27:01.158053   16245 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0610 04:27:01.158108   16245 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.833312958s
	I0610 04:27:01.158146   16245 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0610 04:27:01.840807   16245 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0610 04:27:01.840857   16245 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.51598125s
	I0610 04:27:01.840881   16245 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0610 04:27:03.088428   16245 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0610 04:27:03.088523   16245 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.763886125s
	I0610 04:27:03.088558   16245 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0610 04:27:03.678024   16245 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0610 04:27:03.678073   16245 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.353437583s
	I0610 04:27:03.678105   16245 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0610 04:27:05.048344   16245 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0610 04:27:05.048393   16245 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.723543583s
	I0610 04:27:05.048424   16245 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0610 04:27:05.722444   16245 start.go:360] acquireMachinesLock for test-preload-558000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:27:05.722858   16245 start.go:364] duration metric: took 338.667µs to acquireMachinesLock for "test-preload-558000"
	I0610 04:27:05.722981   16245 start.go:93] Provisioning new machine with config: &{Name:test-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:27:05.723238   16245 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:27:05.731728   16245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:27:05.780945   16245 start.go:159] libmachine.API.Create for "test-preload-558000" (driver="qemu2")
	I0610 04:27:05.780991   16245 client.go:168] LocalClient.Create starting
	I0610 04:27:05.781099   16245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:27:05.781165   16245 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:05.781186   16245 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:05.781278   16245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:27:05.781322   16245 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:05.781335   16245 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:05.781881   16245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:27:06.042638   16245 main.go:141] libmachine: Creating SSH key...
	I0610 04:27:06.294761   16245 main.go:141] libmachine: Creating Disk image...
	I0610 04:27:06.294773   16245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:27:06.294988   16245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 04:27:06.308351   16245 main.go:141] libmachine: STDOUT: 
	I0610 04:27:06.308369   16245 main.go:141] libmachine: STDERR: 
	I0610 04:27:06.308444   16245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2 +20000M
	I0610 04:27:06.319950   16245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:27:06.319965   16245 main.go:141] libmachine: STDERR: 
	I0610 04:27:06.319985   16245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 04:27:06.319989   16245 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:27:06.320043   16245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:76:13:11:e7:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/test-preload-558000/disk.qcow2
	I0610 04:27:06.321859   16245 main.go:141] libmachine: STDOUT: 
	I0610 04:27:06.321875   16245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:27:06.321886   16245 client.go:171] duration metric: took 540.88675ms to LocalClient.Create
	I0610 04:27:07.844470   16245 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0610 04:27:07.844575   16245 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.519696166s
	I0610 04:27:07.844604   16245 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0610 04:27:07.844634   16245 cache.go:87] Successfully saved all images to host disk.
	I0610 04:27:08.324100   16245 start.go:128] duration metric: took 2.600811417s to createHost
	I0610 04:27:08.324187   16245 start.go:83] releasing machines lock for "test-preload-558000", held for 2.601252417s
	W0610 04:27:08.324480   16245 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:08.333821   16245 out.go:177] 
	W0610 04:27:08.341763   16245 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:27:08.341788   16245 out.go:239] * 
	* 
	W0610 04:27:08.344385   16245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:27:08.353840   16245 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-06-10 04:27:08.371844 -0700 PDT m=+672.067241001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-558000 -n test-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-558000 -n test-preload-558000: exit status 7 (66.898208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-558000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-558000
--- FAIL: TestPreload (10.33s)
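Every failed start in this run dies at the same step: socket_vmnet_client cannot reach the network daemon's socket, so QEMU is never launched ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal diagnostic sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as in the commands logged above; the gateway address and the use of launchctl are assumptions about the host setup, not taken from this report:

    # Does the socket exist at the path minikube is using?
    ls -l /var/run/socket_vmnet

    # Is the daemon loaded? (the service label varies by install method)
    sudo launchctl list | grep -i socket_vmnet

    # If not, run it in the foreground; 192.168.105.1 is the project's
    # documented default gateway address (an assumption here).
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

    # Re-test with the same client binary the log shows minikube using;
    # `true` stands in for the qemu-system-aarch64 command line.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo socket reachable

Once the socket accepts connections, re-running the failed command (out/minikube-darwin-arm64 start -p test-preload-558000 ...) should get past the VM creation step.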

TestScheduledStopUnix (10.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-895000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-895000 --memory=2048 --driver=qemu2 : exit status 80 (9.841200541s)

-- stdout --
	* [scheduled-stop-895000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-895000" primary control-plane node in "scheduled-stop-895000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-895000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-895000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-895000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-895000" primary control-plane node in "scheduled-stop-895000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-895000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-895000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-06-10 04:27:18.382848 -0700 PDT m=+682.078175585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-895000 -n scheduled-stop-895000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-895000 -n scheduled-stop-895000: exit status 7 (68.266875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-895000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-895000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-895000
--- FAIL: TestScheduledStopUnix (10.02s)

TestSkaffold (13.38s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4034897784 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-250000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-250000 --memory=2600 --driver=qemu2 : exit status 80 (9.972376084s)

-- stdout --
	* [skaffold-250000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-250000" primary control-plane node in "skaffold-250000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-250000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-250000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-250000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-250000" primary control-plane node in "skaffold-250000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-250000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-250000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-06-10 04:27:31.764302 -0700 PDT m=+695.459536585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-250000 -n skaffold-250000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-250000 -n skaffold-250000: exit status 7 (63.535583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-250000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-250000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-250000
--- FAIL: TestSkaffold (13.38s)
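The three failures above follow an identical pattern: in the detailed TestPreload log, the qemu-img disk steps complete ("Image resized.") and only the subsequent socket connection fails, so the disk tooling itself is healthy. The logged image-creation sequence can be exercised in isolation to confirm this; the /tmp paths below are hypothetical stand-ins for the profile directories in the log:

    # Mirror the logged sequence with scratch files.
    qemu-img create -f raw /tmp/scratch.raw 0
    qemu-img convert -f raw -O qcow2 /tmp/scratch.raw /tmp/scratch.qcow2
    qemu-img resize /tmp/scratch.qcow2 +20000M
    qemu-img info /tmp/scratch.qcow2   # virtual size should now report 20000 MiB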

TestRunningBinaryUpgrade (636.99s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3110606341 start -p running-upgrade-017000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3110606341 start -p running-upgrade-017000 --memory=2200 --vm-driver=qemu2 : (1m8.575971875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-017000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-017000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m53.928927959s)

-- stdout --
	* [running-upgrade-017000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-017000" primary control-plane node in "running-upgrade-017000" cluster
	* Updating the running qemu2 "running-upgrade-017000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0610 04:29:04.190460   16595 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:29:04.190599   16595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:29:04.190602   16595 out.go:304] Setting ErrFile to fd 2...
	I0610 04:29:04.190608   16595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:29:04.190755   16595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:29:04.191863   16595 out.go:298] Setting JSON to false
	I0610 04:29:04.209096   16595 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8915,"bootTime":1718010029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:29:04.209177   16595 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:29:04.211990   16595 out.go:177] * [running-upgrade-017000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:29:04.220503   16595 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:29:04.223451   16595 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:29:04.220512   16595 notify.go:220] Checking for updates...
	I0610 04:29:04.231520   16595 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:29:04.234477   16595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:29:04.237503   16595 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:29:04.240592   16595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:29:04.243719   16595 config.go:182] Loaded profile config "running-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 04:29:04.247491   16595 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 04:29:04.250526   16595 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:29:04.254481   16595 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:29:04.261500   16595 start.go:297] selected driver: qemu2
	I0610 04:29:04.261505   16595 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53086 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 04:29:04.261551   16595 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:29:04.263786   16595 cni.go:84] Creating CNI manager for ""
	I0610 04:29:04.263802   16595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:29:04.263823   16595 start.go:340] cluster config:
	{Name:running-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53086 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 04:29:04.263877   16595 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:29:04.271544   16595 out.go:177] * Starting "running-upgrade-017000" primary control-plane node in "running-upgrade-017000" cluster
	I0610 04:29:04.275487   16595 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 04:29:04.275521   16595 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0610 04:29:04.275530   16595 cache.go:56] Caching tarball of preloaded images
	I0610 04:29:04.275577   16595 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:29:04.275582   16595 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0610 04:29:04.275633   16595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/config.json ...
	I0610 04:29:04.275977   16595 start.go:360] acquireMachinesLock for running-upgrade-017000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:29:16.685901   16595 start.go:364] duration metric: took 12.409803583s to acquireMachinesLock for "running-upgrade-017000"
	I0610 04:29:16.685954   16595 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:29:16.685992   16595 fix.go:54] fixHost starting: 
	I0610 04:29:16.687152   16595 fix.go:112] recreateIfNeeded on running-upgrade-017000: state=Running err=<nil>
	W0610 04:29:16.687173   16595 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:29:16.691719   16595 out.go:177] * Updating the running qemu2 "running-upgrade-017000" VM ...
	I0610 04:29:16.698673   16595 machine.go:94] provisionDockerMachine start ...
	I0610 04:29:16.698759   16595 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:16.698896   16595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044ce980] 0x1044d11e0 <nil>  [] 0s} localhost 53016 <nil> <nil>}
	I0610 04:29:16.698901   16595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 04:29:16.770492   16595 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-017000
	
	I0610 04:29:16.770510   16595 buildroot.go:166] provisioning hostname "running-upgrade-017000"
	I0610 04:29:16.770543   16595 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:16.770669   16595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044ce980] 0x1044d11e0 <nil>  [] 0s} localhost 53016 <nil> <nil>}
	I0610 04:29:16.770677   16595 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-017000 && echo "running-upgrade-017000" | sudo tee /etc/hostname
	I0610 04:29:16.852856   16595 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-017000
	
	I0610 04:29:16.852914   16595 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:16.853052   16595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044ce980] 0x1044d11e0 <nil>  [] 0s} localhost 53016 <nil> <nil>}
	I0610 04:29:16.853060   16595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-017000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-017000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-017000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 04:29:16.938076   16595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 04:29:16.938091   16595 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19052-14289/.minikube CaCertPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19052-14289/.minikube}
	I0610 04:29:16.938102   16595 buildroot.go:174] setting up certificates
	I0610 04:29:16.938108   16595 provision.go:84] configureAuth start
	I0610 04:29:16.938112   16595 provision.go:143] copyHostCerts
	I0610 04:29:16.938211   16595 exec_runner.go:144] found /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.pem, removing ...
	I0610 04:29:16.938217   16595 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.pem
	I0610 04:29:16.938331   16595 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.pem (1082 bytes)
	I0610 04:29:16.938504   16595 exec_runner.go:144] found /Users/jenkins/minikube-integration/19052-14289/.minikube/cert.pem, removing ...
	I0610 04:29:16.938508   16595 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19052-14289/.minikube/cert.pem
	I0610 04:29:16.938550   16595 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19052-14289/.minikube/cert.pem (1123 bytes)
	I0610 04:29:16.938650   16595 exec_runner.go:144] found /Users/jenkins/minikube-integration/19052-14289/.minikube/key.pem, removing ...
	I0610 04:29:16.938653   16595 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19052-14289/.minikube/key.pem
	I0610 04:29:16.938692   16595 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19052-14289/.minikube/key.pem (1675 bytes)
	I0610 04:29:16.938781   16595 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-017000 san=[127.0.0.1 localhost minikube running-upgrade-017000]
	I0610 04:29:17.244058   16595 provision.go:177] copyRemoteCerts
	I0610 04:29:17.244103   16595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 04:29:17.244112   16595 sshutil.go:53] new ssh client: &{IP:localhost Port:53016 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/running-upgrade-017000/id_rsa Username:docker}
	I0610 04:29:17.277955   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 04:29:17.284535   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 04:29:17.291929   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 04:29:17.299000   16595 provision.go:87] duration metric: took 360.87875ms to configureAuth
	I0610 04:29:17.299010   16595 buildroot.go:189] setting minikube options for container-runtime
	I0610 04:29:17.299112   16595 config.go:182] Loaded profile config "running-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 04:29:17.299150   16595 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:17.299243   16595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044ce980] 0x1044d11e0 <nil>  [] 0s} localhost 53016 <nil> <nil>}
	I0610 04:29:17.299247   16595 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 04:29:17.361610   16595 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 04:29:17.361622   16595 buildroot.go:70] root file system type: tmpfs
	I0610 04:29:17.361679   16595 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 04:29:17.361731   16595 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:17.361853   16595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044ce980] 0x1044d11e0 <nil>  [] 0s} localhost 53016 <nil> <nil>}
	I0610 04:29:17.361885   16595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 04:29:17.428569   16595 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 04:29:17.428624   16595 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:17.428736   16595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044ce980] 0x1044d11e0 <nil>  [] 0s} localhost 53016 <nil> <nil>}
	I0610 04:29:17.428746   16595 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 04:29:17.491957   16595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 04:29:17.491966   16595 machine.go:97] duration metric: took 793.278292ms to provisionDockerMachine
	I0610 04:29:17.491977   16595 start.go:293] postStartSetup for "running-upgrade-017000" (driver="qemu2")
	I0610 04:29:17.491984   16595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 04:29:17.492042   16595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 04:29:17.492051   16595 sshutil.go:53] new ssh client: &{IP:localhost Port:53016 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/running-upgrade-017000/id_rsa Username:docker}
	I0610 04:29:17.524549   16595 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 04:29:17.525905   16595 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 04:29:17.525911   16595 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19052-14289/.minikube/addons for local assets ...
	I0610 04:29:17.525980   16595 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19052-14289/.minikube/files for local assets ...
	I0610 04:29:17.526067   16595 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/ssl/certs/147832.pem -> 147832.pem in /etc/ssl/certs
	I0610 04:29:17.526161   16595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 04:29:17.528912   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/ssl/certs/147832.pem --> /etc/ssl/certs/147832.pem (1708 bytes)
	I0610 04:29:17.535604   16595 start.go:296] duration metric: took 43.620292ms for postStartSetup
	I0610 04:29:17.535617   16595 fix.go:56] duration metric: took 849.642959ms for fixHost
	I0610 04:29:17.535652   16595 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:17.535764   16595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044ce980] 0x1044d11e0 <nil>  [] 0s} localhost 53016 <nil> <nil>}
	I0610 04:29:17.535770   16595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 04:29:17.597461   16595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718018957.751152629
	
	I0610 04:29:17.597470   16595 fix.go:216] guest clock: 1718018957.751152629
	I0610 04:29:17.597475   16595 fix.go:229] Guest: 2024-06-10 04:29:17.751152629 -0700 PDT Remote: 2024-06-10 04:29:17.535619 -0700 PDT m=+13.364719460 (delta=215.533629ms)
	I0610 04:29:17.597492   16595 fix.go:200] guest clock delta is within tolerance: 215.533629ms
	I0610 04:29:17.597495   16595 start.go:83] releasing machines lock for "running-upgrade-017000", held for 911.554417ms
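
The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and proceed because the ~215 ms delta is within tolerance. A rough stand-alone equivalent, assuming the same SSH endpoint and GNU date on the host (minikube does this arithmetic in Go):

    # Read both clocks as fractional seconds; report the skew in milliseconds.
    guest=$(ssh -p 53016 docker@localhost 'date +%s.%N')
    host=$(date +%s.%N)   # assumes GNU date; BSD/macOS date has no %N
    echo "guest-host delta: $(echo "($guest - $host) * 1000" | bc) ms"
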
	I0610 04:29:17.597554   16595 ssh_runner.go:195] Run: cat /version.json
	I0610 04:29:17.597565   16595 sshutil.go:53] new ssh client: &{IP:localhost Port:53016 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/running-upgrade-017000/id_rsa Username:docker}
	I0610 04:29:17.597554   16595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 04:29:17.597598   16595 sshutil.go:53] new ssh client: &{IP:localhost Port:53016 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/running-upgrade-017000/id_rsa Username:docker}
	W0610 04:29:17.598235   16595 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53016: connect: connection refused
	I0610 04:29:17.598262   16595 retry.go:31] will retry after 250.640356ms: dial tcp [::1]:53016: connect: connection refused
	W0610 04:29:17.888710   16595 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0610 04:29:17.888840   16595 ssh_runner.go:195] Run: systemctl --version
	I0610 04:29:17.891714   16595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 04:29:17.894088   16595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 04:29:17.894131   16595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0610 04:29:17.897940   16595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0610 04:29:17.903826   16595 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
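
The two find/sed passes above rewrite every bridge and podman CNI config onto the pod CIDR 10.244.0.0/16 (gateway 10.244.0.1). The sed scripts are line-oriented, which works because conflist files are pretty-printed with one key per line. A sketch of the effect on a made-up minimal sample, using GNU sed as inside the guest:

    # Write a minimal sample with one key per line, then apply the rewrites.
    printf '%s\n' '{' '  "subnet": "10.88.0.0/16",' '  "gateway": "10.88.0.1"' '}' > /tmp/sample.conflist
    sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
      /tmp/sample.conflist
    cat /tmp/sample.conflist   # subnet is now 10.244.0.0/16, gateway 10.244.0.1
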
	I0610 04:29:17.903835   16595 start.go:494] detecting cgroup driver to use...
	I0610 04:29:17.903909   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 04:29:17.910124   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0610 04:29:17.913751   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 04:29:17.917206   16595 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 04:29:17.917233   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 04:29:17.920391   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 04:29:17.923450   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 04:29:17.926141   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 04:29:17.929360   16595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 04:29:17.932487   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 04:29:17.935416   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 04:29:17.938331   16595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 04:29:17.941971   16595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 04:29:17.945220   16595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 04:29:17.947981   16595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:18.057944   16595 ssh_runner.go:195] Run: sudo systemctl restart containerd
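
The sed series above brings /etc/containerd/config.toml in line with the detected "cgroupfs" driver; the decisive knob is SystemdCgroup in the runc runtime options. Condensed, the critical edit and reload look like:

    # false selects the cgroupfs driver (true would select the systemd driver);
    # containerd only re-reads the file on restart.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
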
	I0610 04:29:18.069821   16595 start.go:494] detecting cgroup driver to use...
	I0610 04:29:18.069914   16595 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 04:29:18.076389   16595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 04:29:18.081982   16595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 04:29:18.090869   16595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 04:29:18.096169   16595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 04:29:18.101061   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 04:29:18.106493   16595 ssh_runner.go:195] Run: which cri-dockerd
	I0610 04:29:18.107938   16595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 04:29:18.110453   16595 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 04:29:18.115766   16595 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 04:29:18.240930   16595 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 04:29:18.336358   16595 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 04:29:18.336454   16595 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 04:29:18.341550   16595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:18.446819   16595 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 04:29:35.136756   16595 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.68980175s)
	I0610 04:29:35.136822   16595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 04:29:35.142487   16595 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 04:29:35.152980   16595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 04:29:35.158596   16595 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 04:29:35.242829   16595 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 04:29:35.332636   16595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:35.419590   16595 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 04:29:35.427196   16595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 04:29:35.433345   16595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:35.524432   16595 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 04:29:35.569185   16595 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 04:29:35.569255   16595 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 04:29:35.571307   16595 start.go:562] Will wait 60s for crictl version
	I0610 04:29:35.571370   16595 ssh_runner.go:195] Run: which crictl
	I0610 04:29:35.572799   16595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 04:29:35.585659   16595 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0610 04:29:35.585726   16595 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 04:29:35.597817   16595 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 04:29:35.615340   16595 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0610 04:29:35.615408   16595 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0610 04:29:35.616689   16595 kubeadm.go:877] updating cluster {Name:running-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53086 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0610 04:29:35.616737   16595 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 04:29:35.616774   16595 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 04:29:35.627007   16595 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 04:29:35.627020   16595 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0610 04:29:35.627066   16595 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 04:29:35.630115   16595 ssh_runner.go:195] Run: which lz4
	I0610 04:29:35.631301   16595 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 04:29:35.632455   16595 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 04:29:35.632466   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0610 04:29:36.331561   16595 docker.go:649] duration metric: took 700.281792ms to copy over tarball
	I0610 04:29:36.331618   16595 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 04:29:38.192133   16595 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.860488s)
	I0610 04:29:38.192147   16595 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 04:29:38.209577   16595 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 04:29:38.213874   16595 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0610 04:29:38.219073   16595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:38.303625   16595 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 04:29:39.918358   16595 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.614700792s)
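
This is the preload fast path: the per-version image tarball is copied from the host cache into the guest, unpacked over /var so /var/lib/docker arrives pre-populated, and Docker is restarted to pick the layers up. Roughly, with paths simplified for illustration (minikube streams the file over its own SSH session rather than scp):

    # Copy the preload tarball in, unpack it over /var, then restart docker.
    tarball=~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
    scp -P 53016 "$tarball" docker@localhost:/tmp/preloaded.tar.lz4
    ssh -p 53016 docker@localhost \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 &&
       sudo rm /tmp/preloaded.tar.lz4 && sudo systemctl restart docker'
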
	I0610 04:29:39.918462   16595 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 04:29:39.931777   16595 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 04:29:39.931785   16595 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0610 04:29:39.931790   16595 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 04:29:39.942407   16595 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:39.942451   16595 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:39.942505   16595 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:39.942570   16595 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:39.942648   16595 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:39.942711   16595 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 04:29:39.942774   16595 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:39.942866   16595 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:39.951032   16595 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 04:29:39.951134   16595 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:39.951153   16595 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:39.951207   16595 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:39.951211   16595 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:39.951325   16595 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:39.951339   16595 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:39.951401   16595 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:40.816146   16595 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:40.846119   16595 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0610 04:29:40.846155   16595 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:40.846248   16595 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:40.855371   16595 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:40.855843   16595 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:40.872234   16595 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0610 04:29:40.872358   16595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0610 04:29:40.887643   16595 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0610 04:29:40.887652   16595 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0610 04:29:40.887660   16595 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0610 04:29:40.887666   16595 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:40.887684   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0610 04:29:40.887666   16595 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:40.887714   16595 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:40.887746   16595 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:40.887937   16595 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0610 04:29:40.905304   16595 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0610 04:29:40.923732   16595 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0610 04:29:40.923826   16595 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 04:29:40.923940   16595 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:40.933283   16595 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0610 04:29:40.933321   16595 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0610 04:29:40.933388   16595 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0610 04:29:40.961554   16595 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0610 04:29:40.961575   16595 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:40.961635   16595 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:40.964994   16595 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 04:29:40.965096   16595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0610 04:29:40.991866   16595 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:40.993318   16595 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:40.998109   16595 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0610 04:29:40.998136   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0610 04:29:41.001101   16595 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 04:29:41.001202   16595 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:41.025400   16595 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0610 04:29:41.025420   16595 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:41.025480   16595 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:41.028811   16595 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0610 04:29:41.028830   16595 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:41.028881   16595 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:41.032559   16595 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0610 04:29:41.032571   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0610 04:29:41.063828   16595 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0610 04:29:41.063856   16595 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:41.063920   16595 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:41.088409   16595 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0610 04:29:41.088461   16595 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0610 04:29:41.133307   16595 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0610 04:29:41.133317   16595 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 04:29:41.133432   16595 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0610 04:29:41.143761   16595 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0610 04:29:41.143788   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0610 04:29:41.194482   16595 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0610 04:29:41.194497   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0610 04:29:41.360923   16595 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0610 04:29:41.360943   16595 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0610 04:29:41.360950   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0610 04:29:41.398678   16595 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0610 04:29:41.398717   16595 cache_images.go:92] duration metric: took 1.466907042s to LoadCachedImages
	W0610 04:29:41.398760   16595 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
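
After the restart the preloaded tags still resolve to the wrong digests (and for some images the wrong architecture), so the cache fallback kicks in: remove the stale image, transfer the per-image tarball from the host cache, and stream it into the daemon with docker load. One image's round trip, sketched with simplified paths:

    # Replace a stale image inside the guest from the host-side image cache.
    scp -P 53016 ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 \
        docker@localhost:/tmp/pause_3.7
    ssh -p 53016 docker@localhost \
      'docker rmi registry.k8s.io/pause:3.7 2>/dev/null; sudo cat /tmp/pause_3.7 | docker load'
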
	I0610 04:29:41.398766   16595 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0610 04:29:41.398816   16595 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-017000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 04:29:41.398881   16595 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 04:29:41.412224   16595 cni.go:84] Creating CNI manager for ""
	I0610 04:29:41.412236   16595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:29:41.412240   16595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 04:29:41.412257   16595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-017000 NodeName:running-upgrade-017000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 04:29:41.412333   16595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-017000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
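The kubeadm.yaml rendered above stacks four API documents in a single file; kubeadm and the kubelet each consume the kinds they understand. A quick way to see the structure:

    # List the stacked documents in the rendered config.
    grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml
    # kind: InitConfiguration       -> node-local bootstrap (criSocket, node-ip, taints)
    # kind: ClusterConfiguration    -> cluster-wide settings (certSANs, pod/service subnets)
    # kind: KubeletConfiguration    -> kubelet knobs (cgroupDriver, eviction thresholds)
    # kind: KubeProxyConfiguration  -> proxy settings (clusterCIDR, conntrack)
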
	I0610 04:29:41.412390   16595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0610 04:29:41.415196   16595 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 04:29:41.415230   16595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 04:29:41.418133   16595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0610 04:29:41.423301   16595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 04:29:41.428130   16595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0610 04:29:41.434046   16595 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0610 04:29:41.435649   16595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:41.518703   16595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 04:29:41.524736   16595 certs.go:68] Setting up /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000 for IP: 10.0.2.15
	I0610 04:29:41.524742   16595 certs.go:194] generating shared ca certs ...
	I0610 04:29:41.524750   16595 certs.go:226] acquiring lock for ca certs: {Name:mk478b348d446dde3a95549bafcb3e70b2a1a766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:29:41.524911   16595 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.key
	I0610 04:29:41.524959   16595 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/proxy-client-ca.key
	I0610 04:29:41.524964   16595 certs.go:256] generating profile certs ...
	I0610 04:29:41.525034   16595 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/client.key
	I0610 04:29:41.525054   16595 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.key.38fab258
	I0610 04:29:41.525064   16595 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.crt.38fab258 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0610 04:29:41.555225   16595 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.crt.38fab258 ...
	I0610 04:29:41.555230   16595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.crt.38fab258: {Name:mk1707d7d34738d27d5a6c3deca0382e01f0dcf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:29:41.562223   16595 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.key.38fab258 ...
	I0610 04:29:41.562232   16595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.key.38fab258: {Name:mk9869186fde8b0bed46bdd4d7b72bf67dfd7c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:29:41.562397   16595 certs.go:381] copying /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.crt.38fab258 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.crt
	I0610 04:29:41.562535   16595 certs.go:385] copying /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.key.38fab258 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.key
	I0610 04:29:41.562683   16595 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/proxy-client.key
	I0610 04:29:41.562808   16595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/14783.pem (1338 bytes)
	W0610 04:29:41.562839   16595 certs.go:480] ignoring /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/14783_empty.pem, impossibly tiny 0 bytes
	I0610 04:29:41.562845   16595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 04:29:41.562871   16595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem (1082 bytes)
	I0610 04:29:41.562898   16595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem (1123 bytes)
	I0610 04:29:41.562923   16595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/key.pem (1675 bytes)
	I0610 04:29:41.562974   16595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/ssl/certs/147832.pem (1708 bytes)
	I0610 04:29:41.563323   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 04:29:41.570714   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 04:29:41.578063   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 04:29:41.585578   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 04:29:41.592609   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 04:29:41.600214   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 04:29:41.606920   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 04:29:41.614121   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 04:29:41.621345   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 04:29:41.628363   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/14783.pem --> /usr/share/ca-certificates/14783.pem (1338 bytes)
	I0610 04:29:41.635237   16595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/ssl/certs/147832.pem --> /usr/share/ca-certificates/147832.pem (1708 bytes)
	I0610 04:29:41.642459   16595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 04:29:41.647422   16595 ssh_runner.go:195] Run: openssl version
	I0610 04:29:41.649242   16595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147832.pem && ln -fs /usr/share/ca-certificates/147832.pem /etc/ssl/certs/147832.pem"
	I0610 04:29:41.652175   16595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147832.pem
	I0610 04:29:41.653819   16595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 11:16 /usr/share/ca-certificates/147832.pem
	I0610 04:29:41.653841   16595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147832.pem
	I0610 04:29:41.655657   16595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147832.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 04:29:41.658361   16595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 04:29:41.661687   16595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 04:29:41.663153   16595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0610 04:29:41.663171   16595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 04:29:41.665033   16595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 04:29:41.667684   16595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14783.pem && ln -fs /usr/share/ca-certificates/14783.pem /etc/ssl/certs/14783.pem"
	I0610 04:29:41.670726   16595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14783.pem
	I0610 04:29:41.672366   16595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 11:16 /usr/share/ca-certificates/14783.pem
	I0610 04:29:41.672382   16595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14783.pem
	I0610 04:29:41.674074   16595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14783.pem /etc/ssl/certs/51391683.0"
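
The install-then-link sequence above follows OpenSSL's CA lookup convention: verification scans /etc/ssl/certs for a file named after the certificate's subject hash plus a ".0" suffix (b5213941.0 is minikubeCA's hash; 3ec20f2e.0 and 51391683.0 belong to the two test certs). One link reproduced by hand:

    # Compute the subject hash and create the lookup symlink OpenSSL expects.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
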
	I0610 04:29:41.677142   16595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 04:29:41.678559   16595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 04:29:41.680631   16595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 04:29:41.683315   16595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 04:29:41.685190   16595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 04:29:41.687278   16595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 04:29:41.689057   16595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
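
Each openssl run above passes -checkend 86400, which succeeds only if the certificate will still be valid 86400 seconds (24 hours) from now; a failing check is what triggers regeneration. For example:

    # Exit 0 = valid for at least another day; exit 1 = expires within 24h.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "ok for >24h" || echo "expiring soon"
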
	I0610 04:29:41.690796   16595 kubeadm.go:391] StartCluster: {Name:running-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53086 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 04:29:41.690871   16595 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 04:29:41.701529   16595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 04:29:41.705076   16595 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 04:29:41.705083   16595 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 04:29:41.705086   16595 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 04:29:41.705108   16595 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 04:29:41.708005   16595 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 04:29:41.708297   16595 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-017000" does not appear in /Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:29:41.708395   16595 kubeconfig.go:62] /Users/jenkins/minikube-integration/19052-14289/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-017000" cluster setting kubeconfig missing "running-upgrade-017000" context setting]
	I0610 04:29:41.708592   16595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/kubeconfig: {Name:mke1ab156d45cd5cbace7e8cb5713141e8116718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:29:41.709016   16595 kapi.go:59] client config for running-upgrade-017000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/client.key", CAFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10585c460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 04:29:41.709341   16595 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 04:29:41.711933   16595 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-017000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
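
That unified diff is the drift detector: the freshly rendered kubeadm.yaml.new is compared against the file left by the previous run, and any difference (here the criSocket URI scheme and the cgroup driver) forces a reconfigure. The decision in shell form, roughly:

    # A non-zero diff exit means the rendered config changed: adopt the new
    # file and re-run the kubeadm init phases shown below.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
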
	I0610 04:29:41.711939   16595 kubeadm.go:1154] stopping kube-system containers ...
	I0610 04:29:41.711980   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 04:29:41.723695   16595 docker.go:483] Stopping containers: [8e8162120f30 b2715eb24923 7292316f71e4 3a7ed9828586 4c91655d93d0 3c5ee8838e9c d0e6a07e77d4 4a55c2e22ae2 ff491cc45707 e670bbe5f487 10016dd527d4 dda740075c89 3c27685a0548 610b1442bd32 9b9bdaa6f4c3 41c63d55e752 c798d11a8391 29cd866ed5ab c8723c14899d 988d568178df b5927af36d5a]
	I0610 04:29:41.723763   16595 ssh_runner.go:195] Run: docker stop 8e8162120f30 b2715eb24923 7292316f71e4 3a7ed9828586 4c91655d93d0 3c5ee8838e9c d0e6a07e77d4 4a55c2e22ae2 ff491cc45707 e670bbe5f487 10016dd527d4 dda740075c89 3c27685a0548 610b1442bd32 9b9bdaa6f4c3 41c63d55e752 c798d11a8391 29cd866ed5ab c8723c14899d 988d568178df b5927af36d5a
	I0610 04:29:41.735391   16595 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 04:29:41.837874   16595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 04:29:41.841865   16595 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Jun 10 11:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun 10 11:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jun 10 11:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 10 11:28 /etc/kubernetes/scheduler.conf
	
	I0610 04:29:41.841907   16595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/admin.conf
	I0610 04:29:41.844491   16595 kubeadm.go:162] "https://control-plane.minikube.internal:53086" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 04:29:41.844517   16595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 04:29:41.847268   16595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/kubelet.conf
	I0610 04:29:41.850168   16595 kubeadm.go:162] "https://control-plane.minikube.internal:53086" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 04:29:41.850197   16595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 04:29:41.853651   16595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/controller-manager.conf
	I0610 04:29:41.856756   16595 kubeadm.go:162] "https://control-plane.minikube.internal:53086" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 04:29:41.856778   16595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 04:29:41.859530   16595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/scheduler.conf
	I0610 04:29:41.862468   16595 kubeadm.go:162] "https://control-plane.minikube.internal:53086" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 04:29:41.862489   16595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 04:29:41.865770   16595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 04:29:41.868947   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 04:29:41.913365   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 04:29:42.350491   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 04:29:42.619848   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 04:29:42.656012   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
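
Instead of a full `kubeadm init`, the restart path re-runs only the phases it needs, in order: certs, kubeconfigs, kubelet-start, the control-plane static pod manifests, and local etcd. The same sequence as a loop, against the pinned binaries directory from the log:

    # Each entry is deliberately left unquoted at expansion so "certs all"
    # becomes two arguments to `kubeadm init phase`.
    K8S_BIN=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
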
	I0610 04:29:42.689794   16595 api_server.go:52] waiting for apiserver process to appear ...
	I0610 04:29:42.689866   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:43.191922   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:43.691908   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:44.191951   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:44.691933   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:45.191929   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:45.691954   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:46.191981   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:46.691962   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:47.190345   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:47.194658   16595 api_server.go:72] duration metric: took 4.504831958s to wait for apiserver process to appear ...
	I0610 04:29:47.194669   16595 api_server.go:88] waiting for apiserver healthz status ...
	I0610 04:29:47.194680   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:29:52.196779   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:29:52.196800   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:29:57.197041   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:29:57.197087   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:02.197533   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:02.197558   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:07.198067   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:07.198140   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:12.199085   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:12.199142   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:17.200146   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:17.200206   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:22.201398   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:22.201463   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:27.202031   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:27.202069   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:32.203684   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:32.203730   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:37.205779   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:37.205824   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:42.207647   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:42.207696   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:47.210035   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
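
Every healthz probe in this run fails the same way: a 5-second client-side timeout expires while awaiting response headers, which is why the errors read "Client.Timeout exceeded" rather than "connection refused" — the guest at 10.0.2.15:8443 is reachable, but the apiserver never answers. A sketch of one such probe under those observed parameters (the URL and 5 s timeout come from the log; skipping verification of the test cluster's self-signed certificate is an assumption):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against /healthz with a hard 5s client
    // timeout; a hung apiserver surfaces as "context deadline exceeded".
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: trust the test cluster's self-signed cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", string(body))
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }
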
	I0610 04:30:47.210388   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:30:47.245687   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:30:47.245857   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:30:47.267161   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:30:47.267278   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:30:47.281693   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:30:47.281776   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:30:47.294084   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:30:47.294159   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:30:47.305276   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:30:47.305355   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:30:47.317082   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:30:47.317148   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:30:47.330317   16595 logs.go:276] 0 containers: []
	W0610 04:30:47.330329   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:30:47.330392   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:30:47.341181   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:30:47.341200   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:30:47.341205   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:30:47.379838   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:30:47.379848   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:30:47.478741   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:30:47.478761   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:30:47.494964   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:30:47.494978   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:30:47.509311   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:30:47.509324   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:30:47.536126   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:30:47.536133   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:30:47.540496   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:30:47.540502   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:30:47.559392   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:30:47.559404   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:30:47.571336   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:30:47.571345   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:30:47.583319   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:30:47.583330   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:30:47.594716   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:30:47.594728   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:30:47.610881   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:30:47.610894   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:30:47.622859   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:30:47.622870   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:30:47.634292   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:30:47.634301   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:30:47.651447   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:30:47.651460   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:30:47.663091   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:30:47.663106   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:30:47.674547   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:30:47.674559   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:30:47.686072   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:30:47.686084   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:30:47.697312   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:30:47.697325   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
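
Each failed health check triggers the diagnostic sweep above: for every control-plane component, list the containers whose names match k8s_<component>, then tail the last 400 lines of each. A simplified local sketch of that sweep (minikube actually issues these commands inside the VM through its ssh_runner; invoking docker directly on the host is an assumption here):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherComponentLogs finds every container whose name matches
    // k8s_<component> and tails its last 400 log lines, as in the
    // diagnostic cycle above.
    func gatherComponentLogs(component string) (string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return "", err
        }
        var logs strings.Builder
        for _, id := range strings.Fields(string(out)) {
            tail, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                return "", err
            }
            logs.Write(tail)
        }
        return logs.String(), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            if text, err := gatherComponentLogs(c); err == nil {
                fmt.Print(text)
            }
        }
    }
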
	I0610 04:30:50.211363   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:55.213674   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:55.214024   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:30:55.253211   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:30:55.253355   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:30:55.273997   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:30:55.274103   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:30:55.289713   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:30:55.289807   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:30:55.302127   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:30:55.302195   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:30:55.313295   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:30:55.313372   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:30:55.325313   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:30:55.325405   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:30:55.335588   16595 logs.go:276] 0 containers: []
	W0610 04:30:55.335599   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:30:55.335661   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:30:55.356219   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:30:55.356236   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:30:55.356242   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:30:55.375345   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:30:55.375356   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:30:55.386963   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:30:55.386975   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:30:55.398803   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:30:55.398814   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:30:55.417361   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:30:55.417374   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:30:55.456562   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:30:55.456573   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:30:55.497590   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:30:55.497607   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:30:55.516555   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:30:55.516571   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:30:55.527765   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:30:55.527782   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:30:55.539740   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:30:55.539752   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:30:55.565674   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:30:55.565683   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:30:55.569916   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:30:55.569923   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:30:55.584142   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:30:55.584153   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:30:55.595892   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:30:55.595902   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:30:55.617512   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:30:55.617522   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:30:55.629543   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:30:55.629554   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:30:55.642211   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:30:55.642223   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:30:55.655857   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:30:55.655867   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:30:55.667667   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:30:55.667678   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:30:58.182471   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:03.185081   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:03.185540   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:03.226242   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:31:03.226386   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:03.249208   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:31:03.249315   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:03.264712   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:31:03.264793   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:03.277233   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:31:03.277303   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:03.288173   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:31:03.288250   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:03.299030   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:31:03.299107   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:03.310806   16595 logs.go:276] 0 containers: []
	W0610 04:31:03.310818   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:03.310872   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:03.321259   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:31:03.321275   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:31:03.321281   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:31:03.332667   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:31:03.332679   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:31:03.344540   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:31:03.344551   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:31:03.360909   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:03.360920   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:03.396974   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:31:03.396986   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:31:03.410792   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:03.410803   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:03.437241   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:31:03.437248   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:31:03.451026   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:31:03.451037   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:31:03.470363   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:31:03.470375   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:31:03.481911   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:31:03.481923   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:31:03.495161   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:31:03.495172   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:31:03.506881   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:31:03.506892   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:31:03.518413   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:31:03.518427   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:03.532001   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:03.532011   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:03.574813   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:31:03.574822   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:31:03.588967   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:31:03.588977   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:31:03.600410   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:31:03.600422   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:31:03.612839   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:03.612851   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:03.617330   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:31:03.617338   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:31:06.132557   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:11.134988   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:11.135281   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:11.166230   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:31:11.166357   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:11.184880   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:31:11.184979   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:11.198494   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:31:11.198577   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:11.210105   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:31:11.210184   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:11.220753   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:31:11.220822   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:11.231684   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:31:11.231759   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:11.241776   16595 logs.go:276] 0 containers: []
	W0610 04:31:11.241788   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:11.241843   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:11.252248   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:31:11.252263   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:11.252271   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:11.288818   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:31:11.288829   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:31:11.303944   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:31:11.303956   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:31:11.317153   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:31:11.317163   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:31:11.328644   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:31:11.328656   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:31:11.340442   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:31:11.340454   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:31:11.355901   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:31:11.355911   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:31:11.367475   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:11.367485   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:11.394117   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:31:11.394125   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:31:11.406615   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:31:11.406626   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:11.418676   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:31:11.418685   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:31:11.433841   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:11.433853   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:11.438059   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:31:11.438067   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:31:11.452376   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:31:11.452388   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:31:11.463628   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:31:11.463639   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:31:11.477980   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:31:11.477991   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:31:11.489227   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:31:11.489241   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:31:11.509430   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:31:11.509442   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:31:11.520969   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:11.520980   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:14.060548   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:19.062838   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:19.063107   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:19.083635   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:31:19.083734   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:19.102064   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:31:19.102135   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:19.113344   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:31:19.113406   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:19.123753   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:31:19.123822   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:19.134247   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:31:19.134305   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:19.146608   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:31:19.146681   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:19.156907   16595 logs.go:276] 0 containers: []
	W0610 04:31:19.156922   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:19.156980   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:19.167196   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:31:19.167212   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:31:19.167218   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:31:19.181258   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:31:19.181269   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:31:19.194365   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:31:19.195807   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:31:19.212556   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:31:19.212566   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:31:19.228278   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:31:19.228290   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:19.240916   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:19.240928   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:19.245334   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:31:19.245341   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:31:19.262229   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:31:19.262239   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:31:19.278431   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:31:19.278443   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:31:19.289776   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:31:19.289787   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:31:19.300914   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:31:19.300925   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:31:19.312357   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:19.312368   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:19.351668   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:31:19.351678   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:31:19.363744   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:31:19.363756   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:31:19.381973   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:31:19.381983   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:31:19.392794   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:19.392805   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:19.417653   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:19.417660   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:19.454079   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:31:19.454102   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:31:19.467433   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:31:19.467445   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:31:21.980177   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:26.982486   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:26.982879   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:27.023278   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:31:27.023415   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:27.042949   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:31:27.043045   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:27.057636   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:31:27.057726   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:27.073946   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:31:27.074014   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:27.084492   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:31:27.084562   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:27.095783   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:31:27.095864   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:27.106189   16595 logs.go:276] 0 containers: []
	W0610 04:31:27.106200   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:27.106254   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:27.116858   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:31:27.116876   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:27.116881   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:27.157566   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:27.157573   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:27.161711   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:31:27.161721   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:31:27.173222   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:31:27.173235   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:31:27.184675   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:31:27.184687   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:31:27.198592   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:31:27.198602   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:31:27.213002   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:31:27.213016   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:31:27.226455   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:31:27.226470   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:31:27.238373   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:31:27.238384   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:31:27.249368   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:31:27.249381   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:31:27.261140   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:31:27.261149   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:31:27.280306   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:27.280317   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:27.306094   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:31:27.306102   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:31:27.318079   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:31:27.318089   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:31:27.329846   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:31:27.329858   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:27.342104   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:27.342115   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:27.378169   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:31:27.378181   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:31:27.395939   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:31:27.395951   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:31:27.407090   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:31:27.407101   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:31:29.920633   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:34.922881   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:34.923104   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:34.942236   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:31:34.942334   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:34.956403   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:31:34.956483   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:34.967965   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:31:34.968031   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:34.983057   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:31:34.983134   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:34.993365   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:31:34.993435   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:35.003523   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:31:35.003595   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:35.018047   16595 logs.go:276] 0 containers: []
	W0610 04:31:35.018063   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:35.018128   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:35.028859   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:31:35.028875   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:35.028881   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:35.054581   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:31:35.054588   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:31:35.067650   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:31:35.067659   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:31:35.078671   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:31:35.078683   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:31:35.090236   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:35.090247   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:35.126108   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:31:35.126117   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:31:35.141843   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:31:35.141852   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:31:35.153811   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:31:35.153821   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:31:35.165420   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:31:35.165431   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:31:35.182217   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:31:35.182225   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:35.194978   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:35.194993   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:35.199928   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:31:35.199936   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:31:35.213140   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:31:35.213149   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:31:35.224497   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:31:35.224513   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:31:35.235820   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:31:35.235832   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:31:35.247183   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:31:35.247194   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:31:35.258652   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:35.258664   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:35.297904   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:31:35.297915   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:31:35.316535   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:31:35.316553   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:31:37.832801   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:42.835044   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:42.835244   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:42.847409   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:31:42.847485   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:42.857735   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:31:42.857808   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:42.874198   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:31:42.874270   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:42.889354   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:31:42.889422   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:42.900362   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:31:42.900426   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:42.910974   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:31:42.911040   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:42.925536   16595 logs.go:276] 0 containers: []
	W0610 04:31:42.925551   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:42.925609   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:42.939619   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:31:42.939637   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:42.939643   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:42.944508   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:31:42.944518   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:31:42.957999   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:31:42.958008   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:31:42.969463   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:31:42.969474   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:31:42.986600   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:31:42.986611   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:31:42.997801   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:42.997813   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:43.036164   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:31:43.036175   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:31:43.050270   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:31:43.050281   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:31:43.063797   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:31:43.063809   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:31:43.075315   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:31:43.075327   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:31:43.087608   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:31:43.087618   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:31:43.099399   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:31:43.099410   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:31:43.111056   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:31:43.111067   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:31:43.122468   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:31:43.122481   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:31:43.134196   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:43.134209   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:43.158304   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:43.158312   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:43.196224   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:31:43.196232   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:31:43.208899   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:31:43.208912   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:31:43.220257   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:31:43.220269   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:45.734462   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:50.734938   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:50.735083   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:50.746359   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:31:50.746432   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:50.756746   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:31:50.756824   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:50.768018   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:31:50.768093   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:50.778432   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:31:50.778509   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:50.788824   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:31:50.788897   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:50.799467   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:31:50.799540   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:50.810791   16595 logs.go:276] 0 containers: []
	W0610 04:31:50.810802   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:50.810865   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:50.825395   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:31:50.825414   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:50.825419   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:50.829621   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:50.829630   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:50.869592   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:31:50.869604   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:31:50.882434   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:31:50.882445   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:31:50.897360   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:31:50.897370   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:31:50.908492   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:31:50.908504   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:31:50.920136   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:31:50.920148   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:31:50.931299   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:31:50.931311   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:31:50.949211   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:50.949221   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:50.990133   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:31:50.990142   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:31:51.002187   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:31:51.002199   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:31:51.013155   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:31:51.013166   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:31:51.026923   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:31:51.026933   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:31:51.046318   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:31:51.046336   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:31:51.062560   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:31:51.062571   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:31:51.078698   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:31:51.078707   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:31:51.095917   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:31:51.095927   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:31:51.107479   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:51.107494   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:51.132531   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:31:51.132539   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:53.646646   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:58.648947   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:58.649220   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:58.675366   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:31:58.675501   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:58.691997   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:31:58.692074   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:58.706370   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:31:58.706448   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:58.718005   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:31:58.718075   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:58.732131   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:31:58.732200   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:58.743142   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:31:58.743222   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:58.755009   16595 logs.go:276] 0 containers: []
	W0610 04:31:58.755020   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:58.755080   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:58.766207   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:31:58.766223   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:31:58.766230   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:31:58.777947   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:31:58.777959   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:31:58.794486   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:31:58.794497   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:31:58.805664   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:58.805675   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:58.846791   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:58.846810   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:58.851596   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:58.851605   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:58.891033   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:31:58.891043   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:31:58.905135   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:31:58.905146   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:31:58.916414   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:31:58.916424   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:31:58.927292   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:31:58.927303   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:58.939511   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:31:58.939522   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:31:58.953565   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:31:58.953575   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:31:58.967123   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:31:58.967134   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:31:58.978461   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:31:58.978471   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:31:58.995233   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:31:58.995246   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:31:59.006237   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:31:59.006251   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:31:59.019000   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:31:59.019010   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:31:59.030279   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:59.030293   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:59.054520   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:31:59.054527   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:32:01.568052   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:06.570399   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:06.570657   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:06.594863   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:32:06.594980   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:06.610868   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:32:06.610959   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:06.624862   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:32:06.624937   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:06.636155   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:32:06.636226   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:06.646261   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:32:06.646332   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:06.657046   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:32:06.657125   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:06.667349   16595 logs.go:276] 0 containers: []
	W0610 04:32:06.667361   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:06.667419   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:06.677213   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:32:06.677231   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:06.677237   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:06.713041   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:32:06.713052   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:32:06.726885   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:32:06.726896   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:32:06.739513   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:32:06.739524   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:32:06.751332   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:32:06.751343   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:32:06.763710   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:32:06.763721   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:32:06.778337   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:32:06.778349   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:32:06.790677   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:06.790688   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:06.795108   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:32:06.795115   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:32:06.808692   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:32:06.808702   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:32:06.825678   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:32:06.825688   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:32:06.838652   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:32:06.838663   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:32:06.853338   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:32:06.853351   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:06.865469   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:06.865480   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:06.905834   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:32:06.905847   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:32:06.917346   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:32:06.917360   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:32:06.928475   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:32:06.928487   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:32:06.940603   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:32:06.940617   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:32:06.951512   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:06.951521   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:09.478085   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:14.480415   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:14.480802   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:14.516990   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:32:14.517144   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:14.536855   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:32:14.536952   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:14.550953   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:32:14.551030   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:14.563435   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:32:14.563498   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:14.573843   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:32:14.573911   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:14.585342   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:32:14.585414   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:14.596129   16595 logs.go:276] 0 containers: []
	W0610 04:32:14.596149   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:14.596221   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:14.606464   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:32:14.606485   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:32:14.606491   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:32:14.618922   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:32:14.618933   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:32:14.633112   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:32:14.633128   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:32:14.645098   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:32:14.645110   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:32:14.656409   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:14.656422   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:14.679862   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:14.679869   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:14.715369   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:32:14.715380   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:32:14.729188   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:32:14.729200   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:32:14.740762   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:32:14.740772   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:32:14.752075   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:14.752090   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:14.756899   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:32:14.756910   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:32:14.769672   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:32:14.769686   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:32:14.781823   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:32:14.781834   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:32:14.797695   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:32:14.797705   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:14.809328   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:14.809340   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:14.851915   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:32:14.851931   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:32:14.877096   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:32:14.877107   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:32:14.889379   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:32:14.889391   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:32:14.900620   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:32:14.900631   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
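Once the probe fails, each pass begins by enumerating the current and previous container for every control-plane component, which is why the docker ps --filter=name=k8s_<component> lines keep reporting "2 containers" (and zero for the absent kindnet CNI). A hedged local sketch of that enumeration step via os/exec; in the log the same commands run over SSH inside the VM, and listContainers is a hypothetical helper, not minikube's API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers is a hypothetical helper mirroring the log's
    // `docker ps -a --filter=name=k8s_<name> --format={{.ID}}` calls.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The eight components probed at the top of every cycle above.
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }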
	I0610 04:32:17.418807   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:22.421149   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:22.421500   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:22.460629   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:32:22.460782   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:22.483145   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:32:22.483243   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:22.506621   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:32:22.506697   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:22.517959   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:32:22.518038   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:22.528425   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:32:22.528490   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:22.541539   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:32:22.541618   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:22.551971   16595 logs.go:276] 0 containers: []
	W0610 04:32:22.551982   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:22.552045   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:22.562450   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:32:22.562464   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:22.562469   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:22.601657   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:22.601665   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:22.605886   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:32:22.605895   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:32:22.619280   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:22.619292   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:22.642106   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:32:22.642114   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:32:22.653796   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:32:22.653807   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:22.665334   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:22.665344   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:22.700223   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:32:22.700235   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:32:22.715235   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:32:22.715245   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:32:22.727978   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:32:22.727989   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:32:22.743340   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:32:22.743350   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:32:22.755525   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:32:22.755534   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:32:22.773023   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:32:22.773034   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:32:22.786364   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:32:22.786374   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:32:22.797536   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:32:22.797548   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:32:22.809017   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:32:22.809029   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:32:22.820687   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:32:22.820698   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:32:22.834626   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:32:22.834638   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:32:22.847176   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:32:22.847190   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
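With the container IDs in hand, each "Gathering logs for <component> [<id>] ..." step is just docker logs --tail 400 <id> wrapped in bash, so only the most recent 400 lines per container survive into the report. A sketch under the same assumptions (local docker instead of SSH; the two IDs are copied from the enumeration above and hypothetical on any other host):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs mirrors the `docker logs --tail 400 <id>` step: only the most
    // recent 400 lines of a container's output are collected per pass.
    func gatherLogs(id string) ([]byte, error) {
        // CombinedOutput, since container logs may arrive on stderr as well as stdout.
        return exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    }

    func main() {
        // IDs copied from the enumeration above; hypothetical on any other host.
        for _, id := range []string{"f33f4dc9668d", "8e2785778b5c"} {
            out, err := gatherLogs(id)
            if err != nil {
                fmt.Println(id, "error:", err)
                continue
            }
            fmt.Printf("=== %s ===\n%s", id, out)
        }
    }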
	I0610 04:32:25.361003   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:30.363278   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:30.363558   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:30.388940   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:32:30.389070   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:30.410407   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:32:30.410496   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:30.423277   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:32:30.423346   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:30.434089   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:32:30.434164   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:30.444784   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:32:30.444858   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:30.455800   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:32:30.455880   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:30.466169   16595 logs.go:276] 0 containers: []
	W0610 04:32:30.466183   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:30.466245   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:30.479974   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:32:30.479989   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:30.479994   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:30.519262   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:32:30.519278   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:32:30.531921   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:32:30.531931   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:32:30.547358   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:30.547368   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:30.571543   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:30.571554   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:30.575990   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:32:30.575995   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:32:30.600454   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:32:30.600464   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:32:30.611914   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:32:30.611926   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:32:30.628661   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:32:30.628672   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:32:30.640288   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:30.640302   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:30.681210   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:32:30.681219   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:32:30.694168   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:32:30.694182   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:32:30.709099   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:32:30.709110   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:30.721245   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:32:30.721256   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:32:30.732992   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:32:30.733003   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:32:30.746947   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:32:30.746960   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:32:30.761678   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:32:30.761688   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:32:30.773913   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:32:30.773924   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:32:30.785997   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:32:30.786007   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:32:33.298994   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:38.301249   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:38.301457   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:38.322289   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:32:38.322396   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:38.339113   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:32:38.339192   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:38.350812   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:32:38.350882   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:38.361338   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:32:38.361400   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:38.371747   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:32:38.371804   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:38.382326   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:32:38.382392   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:38.392422   16595 logs.go:276] 0 containers: []
	W0610 04:32:38.392436   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:38.392496   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:38.403090   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:32:38.403105   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:38.403113   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:38.407367   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:32:38.407375   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:32:38.421984   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:32:38.421997   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:32:38.433388   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:32:38.433401   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:32:38.445261   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:32:38.445274   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:32:38.458108   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:32:38.458121   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:32:38.474750   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:32:38.474764   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:32:38.491143   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:38.491154   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:38.527785   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:32:38.527796   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:32:38.544518   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:32:38.544528   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:32:38.556478   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:32:38.556490   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:32:38.567934   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:32:38.567944   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:32:38.578907   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:38.578919   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:38.603432   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:32:38.603440   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:38.615760   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:38.615773   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:38.656272   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:32:38.656280   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:32:38.669736   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:32:38.669749   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:32:38.683350   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:32:38.683361   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:32:38.694257   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:32:38.694268   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:32:41.206799   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:46.209029   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:46.209127   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:46.221061   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:32:46.221142   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:46.232462   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:32:46.232538   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:46.243307   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:32:46.243378   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:46.254078   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:32:46.254152   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:46.264613   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:32:46.264680   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:46.275090   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:32:46.275159   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:46.284982   16595 logs.go:276] 0 containers: []
	W0610 04:32:46.284993   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:46.285046   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:46.295813   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:32:46.295830   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:32:46.295835   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:32:46.307276   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:32:46.307288   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:32:46.318177   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:32:46.318189   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:32:46.329136   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:46.329150   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:46.367948   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:46.367956   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:46.372116   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:32:46.372122   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:32:46.386512   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:32:46.386521   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:32:46.404349   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:32:46.404358   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:32:46.416202   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:32:46.416213   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:32:46.427373   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:32:46.427385   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:32:46.440884   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:32:46.440896   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:32:46.453181   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:46.453192   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:46.476306   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:32:46.476316   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:46.488782   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:46.488791   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:46.532428   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:32:46.532438   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:32:46.550867   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:32:46.550883   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:32:46.563252   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:32:46.563266   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:32:46.574815   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:32:46.574826   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:32:46.590421   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:32:46.590434   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:32:49.104053   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:54.106366   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:54.106731   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:54.156296   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:32:54.156395   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:54.179642   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:32:54.179723   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:54.202417   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:32:54.202492   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:54.213123   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:32:54.213194   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:54.225026   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:32:54.225097   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:54.235795   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:32:54.235870   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:54.245232   16595 logs.go:276] 0 containers: []
	W0610 04:32:54.245244   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:54.245301   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:54.255540   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:32:54.255555   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:32:54.255560   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:32:54.266848   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:32:54.266859   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:32:54.283683   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:32:54.283693   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:32:54.295244   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:32:54.295257   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:54.307449   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:54.307460   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:54.347644   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:32:54.347656   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:32:54.361383   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:32:54.361392   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:32:54.372922   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:32:54.372932   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:32:54.387093   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:32:54.387103   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:32:54.408883   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:32:54.408894   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:32:54.420815   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:32:54.420830   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:32:54.432846   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:32:54.432856   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:32:54.443599   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:32:54.443610   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:32:54.455677   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:54.455687   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:54.496547   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:54.496560   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:54.501109   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:54.501118   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:54.524882   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:32:54.524895   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:32:54.536786   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:32:54.536803   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:32:54.549443   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:32:54.549454   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:32:57.063872   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:02.066086   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:02.066304   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:02.089703   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:33:02.089808   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:02.104491   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:33:02.104566   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:02.117052   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:33:02.117128   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:02.127814   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:33:02.127881   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:02.138051   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:33:02.138110   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:02.152536   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:33:02.152604   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:02.163716   16595 logs.go:276] 0 containers: []
	W0610 04:33:02.163728   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:02.163782   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:02.174406   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:33:02.174420   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:02.174426   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:02.210486   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:33:02.210497   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:33:02.224380   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:33:02.224395   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:33:02.236652   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:33:02.236665   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:33:02.247861   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:33:02.247873   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:33:02.260656   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:33:02.260667   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:33:02.274708   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:33:02.274720   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:33:02.286285   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:33:02.286294   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:33:02.298964   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:33:02.298974   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:33:02.315948   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:33:02.315962   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:33:02.327039   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:02.327049   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:02.367395   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:33:02.367403   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:33:02.380249   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:33:02.380259   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:33:02.391575   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:02.391588   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:02.415834   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:33:02.415846   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:33:02.427802   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:02.427814   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:02.432328   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:33:02.432336   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:33:02.448872   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:33:02.448885   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:33:02.460871   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:33:02.460883   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:33:04.974174   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:09.976541   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:09.977015   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:10.016758   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:33:10.016896   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:10.039177   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:33:10.039283   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:10.054655   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:33:10.054737   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:10.067203   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:33:10.067276   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:10.078468   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:33:10.078544   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:10.089254   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:33:10.089318   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:10.099761   16595 logs.go:276] 0 containers: []
	W0610 04:33:10.099772   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:10.099831   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:10.110593   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:33:10.110607   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:33:10.110612   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:33:10.122200   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:33:10.122213   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:33:10.133995   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:33:10.134007   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:33:10.145860   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:33:10.145873   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:33:10.157922   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:33:10.157935   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:33:10.170334   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:33:10.170343   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:33:10.181799   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:33:10.181811   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:33:10.193094   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:33:10.193103   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:33:10.204116   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:10.204127   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:10.209028   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:33:10.209036   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:33:10.228990   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:33:10.229001   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:33:10.241838   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:33:10.241853   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:33:10.253866   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:10.253879   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:10.293205   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:33:10.293213   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:33:10.304666   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:33:10.304678   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:33:10.317287   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:33:10.317298   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:33:10.334286   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:10.334295   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:10.356434   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:10.356441   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:10.393309   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:33:10.393321   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:33:12.909485   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:17.911829   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:17.912325   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:17.950435   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:33:17.950576   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:17.971863   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:33:17.971975   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:17.989964   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:33:17.990042   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:18.002182   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:33:18.002243   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:18.014072   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:33:18.014143   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:18.025060   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:33:18.025144   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:18.035521   16595 logs.go:276] 0 containers: []
	W0610 04:33:18.035533   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:18.035592   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:18.047177   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
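Each eight-command burst like the one above is one enumeration pass: `docker ps -a` is filtered on the `k8s_<component>` container-name prefix for every expected control-plane component, and the resulting IDs feed the log-gathering loop that follows (two IDs per component here because the restart left an old and a new container). Condensed into a loop, with the component list taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component list as enumerated in the log; kindnet is probed too and
	// legitimately comes back empty on this Docker-runtime cluster.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```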
	I0610 04:33:18.047194   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:33:18.047202   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:33:18.060433   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:33:18.060447   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:33:18.076348   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:33:18.076360   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:33:18.088263   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:33:18.088276   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:33:18.101364   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:33:18.101375   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:33:18.113732   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:18.113746   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:18.118473   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:33:18.118478   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:33:18.130944   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:33:18.130956   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:33:18.144079   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:33:18.144091   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:33:18.160750   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:33:18.160760   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:33:18.173506   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:33:18.173518   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:33:18.184933   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:18.184945   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:18.221521   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:33:18.221532   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:33:18.232740   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:33:18.232753   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:33:18.244760   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:18.244773   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:18.269183   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:33:18.269194   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
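The container-status probe just above relies on a shell fallback: `which crictl || echo crictl` substitutes the literal word `crictl` when the binary is absent, so the first `ps -a` fails cleanly and the `|| sudo docker ps -a` branch runs instead. A hypothetical Go wrapper around the same command line (copied verbatim from the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command line copied from the log. If crictl is missing, the backquoted
	// substitution leaves the bare word "crictl", the first ps -a fails, and
	// the || branch falls back to docker.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}
```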
	I0610 04:33:18.281304   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:33:18.281314   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:33:18.297502   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:33:18.297512   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:33:18.314170   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:18.314181   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:20.857526   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:25.859299   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:25.859452   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:25.870586   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:33:25.870657   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:25.880549   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:33:25.880614   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:25.890767   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:33:25.890830   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:25.900724   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:33:25.900793   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:25.911421   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:33:25.911486   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:25.921941   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:33:25.922012   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:25.932584   16595 logs.go:276] 0 containers: []
	W0610 04:33:25.932595   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:25.932649   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:25.942782   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:33:25.942799   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:25.942805   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:25.947144   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:33:25.947153   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:33:25.960947   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:33:25.960960   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:33:25.971904   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:25.971914   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:26.010161   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:33:26.010171   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:33:26.029541   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:33:26.029552   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:33:26.040423   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:33:26.040434   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:33:26.052484   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:33:26.052497   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:33:26.064648   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:33:26.064661   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:33:26.077885   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:33:26.077899   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:33:26.089468   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:33:26.089480   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:33:26.101200   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:33:26.101212   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:33:26.112532   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:33:26.112546   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:33:26.124983   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:26.124995   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:26.159779   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:33:26.159789   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:33:26.172121   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:33:26.172131   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:33:26.183099   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:33:26.183109   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:33:26.195205   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:33:26.195215   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:33:26.212562   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:26.212573   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:28.737078   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:33.739370   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:33.739514   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:33.750888   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:33:33.750959   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:33.763339   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:33:33.763402   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:33.781052   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:33:33.781124   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:33.793157   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:33:33.793284   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:33.809457   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:33:33.809524   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:33.822113   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:33:33.822181   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:33.833142   16595 logs.go:276] 0 containers: []
	W0610 04:33:33.833152   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:33.833211   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:33.844183   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:33:33.844199   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:33:33.844206   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:33:33.857097   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:33:33.857109   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:33:33.870408   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:33:33.870423   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:33:33.886320   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:33:33.886334   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:33:33.900240   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:33:33.900255   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:33:33.915187   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:33:33.915202   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:33:33.926691   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:33:33.926708   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:33:33.938939   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:33.938951   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:33.961738   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:33:33.961749   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:33:33.975895   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:33.975907   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:34.017944   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:34.017956   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:34.022754   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:33:34.022762   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:33:34.034361   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:33:34.034372   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:33:34.051365   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:33:34.051376   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:33:34.063808   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:33:34.063820   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:33:34.075564   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:34.075576   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:34.111158   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:33:34.111171   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:33:34.126952   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:33:34.126963   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:33:34.143203   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:33:34.143212   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:33:36.657039   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:41.659279   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:41.659517   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:41.680671   16595 logs.go:276] 2 containers: [f33f4dc9668d 4c91655d93d0]
	I0610 04:33:41.680790   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:41.695675   16595 logs.go:276] 2 containers: [8e2785778b5c 7292316f71e4]
	I0610 04:33:41.695753   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:41.708179   16595 logs.go:276] 2 containers: [8024b091bd25 3c27685a0548]
	I0610 04:33:41.708256   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:41.719600   16595 logs.go:276] 2 containers: [fa75b931b71d ff491cc45707]
	I0610 04:33:41.719673   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:41.732361   16595 logs.go:276] 2 containers: [76e6288b1bad d0e6a07e77d4]
	I0610 04:33:41.732431   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:41.742602   16595 logs.go:276] 2 containers: [8baa13438b11 e670bbe5f487]
	I0610 04:33:41.742674   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:41.753284   16595 logs.go:276] 0 containers: []
	W0610 04:33:41.753300   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:41.753361   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:41.764453   16595 logs.go:276] 2 containers: [892f94f5f4d2 41c63d55e752]
	I0610 04:33:41.764469   16595 logs.go:123] Gathering logs for etcd [8e2785778b5c] ...
	I0610 04:33:41.764475   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2785778b5c"
	I0610 04:33:41.779746   16595 logs.go:123] Gathering logs for coredns [8024b091bd25] ...
	I0610 04:33:41.779757   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8024b091bd25"
	I0610 04:33:41.791452   16595 logs.go:123] Gathering logs for coredns [3c27685a0548] ...
	I0610 04:33:41.791467   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c27685a0548"
	I0610 04:33:41.802929   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:33:41.802940   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:33:41.814941   16595 logs.go:123] Gathering logs for kube-apiserver [f33f4dc9668d] ...
	I0610 04:33:41.814952   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f33f4dc9668d"
	I0610 04:33:41.828491   16595 logs.go:123] Gathering logs for kube-apiserver [4c91655d93d0] ...
	I0610 04:33:41.828501   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c91655d93d0"
	I0610 04:33:41.840980   16595 logs.go:123] Gathering logs for kube-controller-manager [e670bbe5f487] ...
	I0610 04:33:41.840991   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e670bbe5f487"
	I0610 04:33:41.852350   16595 logs.go:123] Gathering logs for storage-provisioner [892f94f5f4d2] ...
	I0610 04:33:41.852361   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 892f94f5f4d2"
	I0610 04:33:41.863581   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:41.863591   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:41.867849   16595 logs.go:123] Gathering logs for etcd [7292316f71e4] ...
	I0610 04:33:41.867858   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7292316f71e4"
	I0610 04:33:41.881087   16595 logs.go:123] Gathering logs for kube-scheduler [fa75b931b71d] ...
	I0610 04:33:41.881106   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75b931b71d"
	I0610 04:33:41.896493   16595 logs.go:123] Gathering logs for kube-proxy [76e6288b1bad] ...
	I0610 04:33:41.896504   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e6288b1bad"
	I0610 04:33:41.907863   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:41.907875   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:41.947766   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:41.947778   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:41.983103   16595 logs.go:123] Gathering logs for kube-scheduler [ff491cc45707] ...
	I0610 04:33:41.983115   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff491cc45707"
	I0610 04:33:41.994483   16595 logs.go:123] Gathering logs for kube-proxy [d0e6a07e77d4] ...
	I0610 04:33:41.994495   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0e6a07e77d4"
	I0610 04:33:42.005956   16595 logs.go:123] Gathering logs for kube-controller-manager [8baa13438b11] ...
	I0610 04:33:42.005972   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8baa13438b11"
	I0610 04:33:42.023890   16595 logs.go:123] Gathering logs for storage-provisioner [41c63d55e752] ...
	I0610 04:33:42.023906   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41c63d55e752"
	I0610 04:33:42.035196   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:42.035207   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:44.560077   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:49.562542   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:49.562577   16595 kubeadm.go:591] duration metric: took 4m7.855765333s to restartPrimaryControlPlane
	W0610 04:33:49.562606   16595 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 04:33:49.562620   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0610 04:33:50.614399   16595 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.051763334s)
	I0610 04:33:50.614476   16595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
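Once restartPrimaryControlPlane times out, the recovery path is a full `kubeadm reset` followed by a fresh init; the reset and the subsequent kubelet check are the two commands just above. A sketch of that fallback, with `run` as a hypothetical stand-in for minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run stands in for minikube's ssh_runner; here it just executes locally.
func run(script string) error {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%w: %s", err, out)
	}
	return nil
}

func main() {
	// Reset command copied from the log; PATH pins the bundled kubeadm.
	reset := `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" ` +
		`kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`
	if err := run(reset); err != nil {
		fmt.Println("reset failed:", err)
		return
	}
	// After a reset the kubelet should be inactive; is-active exits non-zero
	// in that case, so the error is expected and ignored here.
	_ = run("sudo systemctl is-active --quiet service kubelet")
}
```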
	I0610 04:33:50.620231   16595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 04:33:50.623008   16595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 04:33:50.625754   16595 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 04:33:50.625759   16595 kubeadm.go:156] found existing configuration files:
	
	I0610 04:33:50.625783   16595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/admin.conf
	I0610 04:33:50.628752   16595 kubeadm.go:162] "https://control-plane.minikube.internal:53086" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 04:33:50.628781   16595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 04:33:50.631554   16595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/kubelet.conf
	I0610 04:33:50.633950   16595 kubeadm.go:162] "https://control-plane.minikube.internal:53086" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 04:33:50.633969   16595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 04:33:50.636968   16595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/controller-manager.conf
	I0610 04:33:50.639689   16595 kubeadm.go:162] "https://control-plane.minikube.internal:53086" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 04:33:50.639712   16595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 04:33:50.642162   16595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/scheduler.conf
	I0610 04:33:50.645190   16595 kubeadm.go:162] "https://control-plane.minikube.internal:53086" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53086 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 04:33:50.645212   16595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
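The four grep-then-rm pairs above are iterations of one stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so `kubeadm init` can regenerate it. Condensed, with the endpoint and file list taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:53086"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 when the pattern is absent and 2 when the file is
		// missing (the case in this log); either way the file is unusable.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s lacks %s, removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```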
	I0610 04:33:50.648251   16595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 04:33:50.665061   16595 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0610 04:33:50.665101   16595 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 04:33:50.731970   16595 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 04:33:50.732124   16595 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 04:33:50.732328   16595 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 04:33:50.789029   16595 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 04:33:50.793536   16595 out.go:204]   - Generating certificates and keys ...
	I0610 04:33:50.793575   16595 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 04:33:50.793618   16595 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 04:33:50.793682   16595 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 04:33:50.793800   16595 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 04:33:50.793853   16595 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 04:33:50.793892   16595 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 04:33:50.794001   16595 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 04:33:50.794049   16595 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 04:33:50.794112   16595 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 04:33:50.794176   16595 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 04:33:50.794211   16595 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 04:33:50.794280   16595 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 04:33:50.856986   16595 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 04:33:51.139728   16595 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 04:33:51.205428   16595 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 04:33:51.327067   16595 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 04:33:51.355539   16595 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 04:33:51.355947   16595 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 04:33:51.355968   16595 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 04:33:51.431089   16595 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 04:33:51.439254   16595 out.go:204]   - Booting up control plane ...
	I0610 04:33:51.439308   16595 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 04:33:51.439351   16595 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 04:33:51.439435   16595 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 04:33:51.439475   16595 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 04:33:51.439580   16595 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 04:33:55.934448   16595 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501789 seconds
	I0610 04:33:55.934503   16595 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 04:33:55.939288   16595 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 04:33:56.458176   16595 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 04:33:56.458543   16595 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-017000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 04:33:56.961949   16595 kubeadm.go:309] [bootstrap-token] Using token: 986k22.ybhtc5li94g26zyx
	I0610 04:33:56.968416   16595 out.go:204]   - Configuring RBAC rules ...
	I0610 04:33:56.968474   16595 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 04:33:56.968520   16595 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 04:33:56.972298   16595 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 04:33:56.973232   16595 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 04:33:56.974195   16595 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 04:33:56.975152   16595 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 04:33:56.978321   16595 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 04:33:57.137465   16595 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 04:33:57.366046   16595 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 04:33:57.366630   16595 kubeadm.go:309] 
	I0610 04:33:57.366659   16595 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 04:33:57.366662   16595 kubeadm.go:309] 
	I0610 04:33:57.366703   16595 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 04:33:57.366709   16595 kubeadm.go:309] 
	I0610 04:33:57.366720   16595 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 04:33:57.366746   16595 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 04:33:57.366768   16595 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 04:33:57.366771   16595 kubeadm.go:309] 
	I0610 04:33:57.366793   16595 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 04:33:57.366796   16595 kubeadm.go:309] 
	I0610 04:33:57.366816   16595 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 04:33:57.366840   16595 kubeadm.go:309] 
	I0610 04:33:57.366877   16595 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 04:33:57.366918   16595 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 04:33:57.366952   16595 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 04:33:57.366955   16595 kubeadm.go:309] 
	I0610 04:33:57.367001   16595 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 04:33:57.367052   16595 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 04:33:57.367059   16595 kubeadm.go:309] 
	I0610 04:33:57.367163   16595 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 986k22.ybhtc5li94g26zyx \
	I0610 04:33:57.367214   16595 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:56b5bf6ce93f42fffc51be5724cc4c4fa0c9b611b35ba669ffa5cef3ff8fcf22 \
	I0610 04:33:57.367233   16595 kubeadm.go:309] 	--control-plane 
	I0610 04:33:57.367236   16595 kubeadm.go:309] 
	I0610 04:33:57.367273   16595 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 04:33:57.367276   16595 kubeadm.go:309] 
	I0610 04:33:57.367312   16595 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 986k22.ybhtc5li94g26zyx \
	I0610 04:33:57.367416   16595 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:56b5bf6ce93f42fffc51be5724cc4c4fa0c9b611b35ba669ffa5cef3ff8fcf22 
	I0610 04:33:57.368136   16595 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 04:33:57.368146   16595 cni.go:84] Creating CNI manager for ""
	I0610 04:33:57.368154   16595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:33:57.371523   16595 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 04:33:57.378503   16595 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 04:33:57.381608   16595 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
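The conflist is copied from memory ("scp memory"), so its 496 bytes never appear in the log. The sketch below writes an illustrative bridge-plus-host-local configuration of the same general shape; every field value here is an assumption, not data from this run, and writing under /etc/cni/net.d requires root:

```go
package main

import (
	"log"
	"os"
)

// Illustrative bridge + host-local conflist; all values are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Destination path matches the log; this needs root to succeed.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```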
	I0610 04:33:57.386627   16595 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 04:33:57.386674   16595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 04:33:57.386704   16595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-017000 minikube.k8s.io/updated_at=2024_06_10T04_33_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c2b65c1940ca3bdd8a4d1a84aa1ecb6d007e0b42 minikube.k8s.io/name=running-upgrade-017000 minikube.k8s.io/primary=true
	I0610 04:33:57.430049   16595 kubeadm.go:1107] duration metric: took 43.4145ms to wait for elevateKubeSystemPrivileges
	I0610 04:33:57.430060   16595 ops.go:34] apiserver oom_adj: -16
	W0610 04:33:57.430081   16595 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 04:33:57.430085   16595 kubeadm.go:393] duration metric: took 4m15.737514417s to StartCluster
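The ops.go line above reports the apiserver's legacy OOM score adjustment, read via `cat /proc/$(pgrep kube-apiserver)/oom_adj`; a value of -16 makes the kernel less inclined to OOM-kill the process. The same read as a sketch, assuming a running kube-apiserver:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Newest exact-match kube-apiserver PID, then its legacy oom_adj value;
	// -16 lowers the apiserver's chance of being picked by the OOM killer.
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
```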
	I0610 04:33:57.430095   16595 settings.go:142] acquiring lock: {Name:mk6aafede331d0a23ef380eee9d6038b0fb4c41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:33:57.430183   16595 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:33:57.430604   16595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/kubeconfig: {Name:mke1ab156d45cd5cbace7e8cb5713141e8116718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:33:57.430787   16595 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:33:57.434439   16595 out.go:177] * Verifying Kubernetes components...
	I0610 04:33:57.430884   16595 config.go:182] Loaded profile config "running-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 04:33:57.430826   16595 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 04:33:57.441433   16595 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-017000"
	I0610 04:33:57.441450   16595 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-017000"
	W0610 04:33:57.441453   16595 addons.go:243] addon storage-provisioner should already be in state true
	I0610 04:33:57.441469   16595 host.go:66] Checking if "running-upgrade-017000" exists ...
	I0610 04:33:57.441494   16595 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-017000"
	I0610 04:33:57.441507   16595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-017000"
	I0610 04:33:57.441511   16595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:33:57.442571   16595 kapi.go:59] client config for running-upgrade-017000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/running-upgrade-017000/client.key", CAFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10585c460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
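The kapi.go dump above is the client-go rest.Config that minikube assembles from the profile's client certificates. Reconstructed as a compilable sketch (cert paths and host copied from the log; this is not minikube's actual constructor):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	base := "/Users/jenkins/minikube-integration/19052-14289/.minikube"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/running-upgrade-017000/client.crt",
			KeyFile:  base + "/profiles/running-upgrade-017000/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("building clientset:", err)
		return
	}
	// The default-storageclass addon later fails exactly this kind of call
	// with "dial tcp 10.0.2.15:8443: i/o timeout".
	_ = clientset.StorageV1()
}
```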
	I0610 04:33:57.442696   16595 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-017000"
	W0610 04:33:57.442702   16595 addons.go:243] addon default-storageclass should already be in state true
	I0610 04:33:57.442709   16595 host.go:66] Checking if "running-upgrade-017000" exists ...
	I0610 04:33:57.446438   16595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:33:57.449521   16595 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 04:33:57.449528   16595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 04:33:57.449535   16595 sshutil.go:53] new ssh client: &{IP:localhost Port:53016 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/running-upgrade-017000/id_rsa Username:docker}
	I0610 04:33:57.450112   16595 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 04:33:57.450117   16595 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 04:33:57.450122   16595 sshutil.go:53] new ssh client: &{IP:localhost Port:53016 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/running-upgrade-017000/id_rsa Username:docker}
	I0610 04:33:57.541262   16595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 04:33:57.547958   16595 api_server.go:52] waiting for apiserver process to appear ...
	I0610 04:33:57.548003   16595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:33:57.551722   16595 api_server.go:72] duration metric: took 120.922417ms to wait for apiserver process to appear ...
	I0610 04:33:57.551731   16595 api_server.go:88] waiting for apiserver healthz status ...
	I0610 04:33:57.551738   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:57.595123   16595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 04:33:57.598394   16595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 04:34:02.553956   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:02.554007   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:07.554429   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:07.554454   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:12.554828   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:12.554861   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:17.555570   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:17.555626   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:22.556399   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:22.556422   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:27.557270   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:27.557305   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0610 04:34:27.920396   16595 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0610 04:34:27.928611   16595 out.go:177] * Enabled addons: storage-provisioner
	I0610 04:34:27.936596   16595 addons.go:510] duration metric: took 30.505560834s for enable addons: enabled=[storage-provisioner]
	I0610 04:34:32.558493   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:32.558593   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:37.560299   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:37.560327   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:42.562204   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:42.562273   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:47.564640   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:47.564660   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:52.566892   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:52.566944   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:57.569321   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:57.569451   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:34:57.588312   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:34:57.588378   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:34:57.599101   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:34:57.599174   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:34:57.613137   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:34:57.613212   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:34:57.623716   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:34:57.623794   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:34:57.633950   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:34:57.634020   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:34:57.644825   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:34:57.644897   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:34:57.655370   16595 logs.go:276] 0 containers: []
	W0610 04:34:57.655383   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:34:57.655445   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:34:57.666330   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:34:57.666343   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:34:57.666348   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:34:57.678586   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:34:57.678596   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:34:57.696838   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:34:57.696849   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:34:57.731280   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:34:57.731288   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:34:57.735535   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:34:57.735542   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:34:57.771672   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:34:57.771683   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:34:57.785518   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:34:57.785530   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:34:57.797897   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:34:57.797910   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:34:57.809406   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:34:57.809421   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:34:57.823204   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:34:57.823216   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:34:57.837308   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:34:57.837320   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:34:57.848793   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:34:57.848803   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:34:57.873204   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:34:57.873212   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:00.387036   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:05.389424   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:05.389584   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:05.404621   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:35:05.404698   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:05.416450   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:35:05.416518   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:05.427064   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:35:05.427126   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:05.437827   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:35:05.437895   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:05.448037   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:35:05.448106   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:05.458235   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:35:05.458301   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:05.468522   16595 logs.go:276] 0 containers: []
	W0610 04:35:05.468535   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:05.468589   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:05.479028   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:35:05.479042   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:35:05.479047   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:05.493445   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:05.493459   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:05.497873   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:05.497880   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:05.534236   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:35:05.534247   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:35:05.546229   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:35:05.546239   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:35:05.561815   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:05.561827   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:05.585621   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:35:05.585632   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:35:05.597646   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:35:05.597655   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:35:05.620255   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:05.620267   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:05.653949   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:35:05.653958   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:35:05.668331   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:35:05.668347   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:35:05.682260   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:35:05.682270   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:35:05.694344   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:35:05.694355   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
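
Each probe of https://10.0.2.15:8443/healthz above fails after roughly five seconds with a client-side timeout, after which the tool gathers diagnostics and retries. A minimal sketch of that probe loop, assuming a plain net/http client with a 5-second timeout and skipped TLS verification (this is not minikube's actual api_server.go code, and the overall deadline is assumed, not shown in the log):

    // Hedged sketch: poll /healthz with a 5s per-request timeout until an
    // assumed overall deadline, mirroring the probe cadence in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap before "context deadline exceeded"
            Transport: &http.Transport{
                // the apiserver at 10.0.2.15:8443 serves a self-signed cert
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            } else {
                // e.g. Client.Timeout exceeded while awaiting headers
                fmt.Printf("stopped: %v\n", err)
            }
            time.Sleep(2 * time.Second) // gather diagnostics / back off before the next probe
        }
        fmt.Println("gave up waiting for apiserver")
    }
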
	I0610 04:35:08.210838   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:13.213251   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:13.213350   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:13.225872   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:35:13.225950   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:13.240111   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:35:13.240185   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:13.259382   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:35:13.259454   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:13.270055   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:35:13.270125   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:13.284505   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:35:13.284575   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:13.295017   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:35:13.295080   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:13.304778   16595 logs.go:276] 0 containers: []
	W0610 04:35:13.304790   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:13.304844   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:13.315768   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:35:13.315784   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:13.315790   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:13.353792   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:35:13.353805   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:35:13.367919   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:35:13.367933   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:35:13.384828   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:13.384839   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:13.389470   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:35:13.389479   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:35:13.403518   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:35:13.403529   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:35:13.415099   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:35:13.415111   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:35:13.427087   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:35:13.427099   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:35:13.441163   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:35:13.441175   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:35:13.457428   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:35:13.457438   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:35:13.469320   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:13.469331   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:13.492022   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:13.492031   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:13.524756   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:35:13.524767   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
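
Each diagnostic cycle starts by re-enumerating the control-plane containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, one call per component. A rough local equivalent of that discovery step (the log runs these over SSH inside the guest via ssh_runner; the helper below is a hypothetical stand-in):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listK8sContainers returns the IDs of containers whose names match the
    // given kubeadm component, including exited ones (-a), as the
    // `docker ps -a --filter=name=k8s_<component>` calls above do.
    func listK8sContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listK8sContainers(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

An empty result, as for "kindnet" in every cycle here, simply means no container name matched the filter.
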
	I0610 04:35:16.039059   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:21.041415   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:21.041608   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:21.056763   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:35:21.056852   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:21.069032   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:35:21.069098   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:21.080029   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:35:21.080103   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:21.090255   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:35:21.090330   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:21.100764   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:35:21.100839   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:21.113664   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:35:21.113730   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:21.124085   16595 logs.go:276] 0 containers: []
	W0610 04:35:21.124095   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:21.124157   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:21.134302   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:35:21.134317   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:21.134322   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:21.168392   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:21.168403   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:21.203398   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:35:21.203412   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:35:21.218204   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:35:21.218215   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:35:21.233357   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:35:21.233371   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:35:21.247740   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:35:21.247753   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:35:21.265468   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:35:21.265479   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:35:21.276866   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:21.276878   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:21.300622   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:35:21.300633   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:21.312787   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:21.312799   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:21.317094   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:35:21.317100   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:35:21.330608   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:35:21.330619   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:35:21.341893   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:35:21.341904   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:35:23.855014   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:28.857285   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:28.857516   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:28.872612   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:35:28.872683   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:28.884039   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:35:28.884107   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:28.894356   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:35:28.894422   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:28.905235   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:35:28.905303   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:28.915603   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:35:28.915666   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:28.926615   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:35:28.926679   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:28.936962   16595 logs.go:276] 0 containers: []
	W0610 04:35:28.936974   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:28.937031   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:28.946740   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:35:28.946754   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:35:28.946760   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:35:28.958409   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:35:28.958419   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:35:28.976488   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:35:28.976504   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:35:28.993334   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:35:28.993345   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:35:29.010338   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:35:29.010349   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:35:29.022378   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:29.022391   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:29.027375   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:29.027384   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:29.063882   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:35:29.063894   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:35:29.078805   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:35:29.078816   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:35:29.092510   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:35:29.092521   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:35:29.104530   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:29.104541   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:29.129045   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:35:29.129054   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:29.140672   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:29.140683   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
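
Once the container IDs are known, the collector tails the last 400 lines from each source: `docker logs` per container, `journalctl` for the kubelet and docker/cri-docker units, plus dmesg and `kubectl describe nodes`. A hedged sketch of that fan-out, reusing two IDs from this run (the gather helper is hypothetical; the real tool streams this over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection command through bash, as the
    // ssh_runner lines do; sudo and the 400-line tail are kept verbatim.
    func gather(label, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", label)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("  %s failed: %v\n", label, err)
        }
        _ = out // a real collector would buffer or stream this output
    }

    func main() {
        containers := map[string]string{ // IDs enumerated earlier in this run
            "kube-apiserver": "d2e613d8e061",
            "etcd":           "c2c4254d1da3",
        }
        for name, id := range containers {
            gather(name, "docker logs --tail 400 "+id)
        }
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
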
	I0610 04:35:31.677132   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:36.679666   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:36.679860   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:36.705890   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:35:36.705988   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:36.722319   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:35:36.722399   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:36.736057   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:35:36.736154   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:36.747186   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:35:36.747249   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:36.757877   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:35:36.757942   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:36.768504   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:35:36.768570   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:36.778737   16595 logs.go:276] 0 containers: []
	W0610 04:35:36.778748   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:36.778808   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:36.790976   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:35:36.790990   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:35:36.790996   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:36.802429   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:36.802439   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:36.807373   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:35:36.807380   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:35:36.822012   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:35:36.822031   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:35:36.837326   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:35:36.837337   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:35:36.849119   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:35:36.849129   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:35:36.860561   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:35:36.860571   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:35:36.872782   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:35:36.872793   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:35:36.883833   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:36.883846   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:36.919377   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:36.919390   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:36.954065   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:35:36.954075   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:35:36.968602   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:35:36.968614   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:35:36.988108   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:36.988119   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:39.513485   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:44.515809   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:44.516015   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:44.533255   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:35:44.533337   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:44.548279   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:35:44.548351   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:44.559371   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:35:44.559432   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:44.569756   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:35:44.569824   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:44.580524   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:35:44.580598   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:44.591454   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:35:44.591520   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:44.601271   16595 logs.go:276] 0 containers: []
	W0610 04:35:44.601281   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:44.601332   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:44.611866   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:35:44.611881   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:35:44.611886   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:35:44.623354   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:35:44.623365   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:35:44.637499   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:35:44.637512   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:35:44.648900   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:35:44.648913   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:35:44.667062   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:44.667075   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:44.703705   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:44.703718   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:44.708706   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:35:44.708715   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:35:44.722120   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:35:44.722132   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:35:44.733769   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:44.733783   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:44.758675   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:35:44.758686   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:44.770454   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:44.770464   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:44.805991   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:35:44.806003   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:35:44.820421   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:35:44.820432   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:35:47.334133   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:52.336510   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:52.336699   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:52.354149   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:35:52.354238   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:52.367723   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:35:52.367796   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:52.378739   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:35:52.378808   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:52.388570   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:35:52.388641   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:52.399022   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:35:52.399087   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:52.409849   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:35:52.409917   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:52.420171   16595 logs.go:276] 0 containers: []
	W0610 04:35:52.420182   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:52.420237   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:52.430021   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:35:52.430041   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:35:52.430046   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:52.441785   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:52.441797   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:52.476817   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:52.476826   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:52.481095   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:35:52.481102   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:35:52.496593   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:35:52.496602   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:35:52.513380   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:35:52.513388   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:35:52.524352   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:35:52.524362   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:35:52.537730   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:52.537740   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:52.582080   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:35:52.582092   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:35:52.596787   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:35:52.596796   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:35:52.608476   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:35:52.608486   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:35:52.629478   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:35:52.629488   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:35:52.641155   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:52.641165   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:55.166206   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:00.167873   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:00.168038   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:00.186496   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:36:00.186591   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:00.201980   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:36:00.202041   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:00.213631   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:36:00.213698   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:00.225243   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:36:00.225304   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:00.235355   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:36:00.235416   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:00.245681   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:36:00.245748   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:00.255971   16595 logs.go:276] 0 containers: []
	W0610 04:36:00.255985   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:00.256036   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:00.266269   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:36:00.266291   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:00.266296   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:00.271452   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:36:00.271462   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:36:00.283489   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:36:00.283502   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:36:00.297708   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:00.297720   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:00.324030   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:36:00.324045   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:36:00.338566   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:36:00.338581   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:36:00.350430   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:36:00.350443   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:36:00.367481   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:36:00.367489   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:36:00.378717   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:00.378731   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:00.413237   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:00.413247   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:00.448019   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:36:00.448032   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:36:00.462413   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:36:00.462426   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:36:00.476235   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:36:00.476246   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
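
The "container status" step shells out to `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`: prefer crictl when it is on the PATH, and fall back to `docker ps -a` when it is missing or fails. The same preference chain written out in Go rather than shell (a sketch, not the tool's implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the fallback one-liner in the log: try crictl
    // first, and if it is absent or errors out, fall back to docker.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("neither crictl nor docker responded:", err)
            return
        }
        fmt.Print(string(out))
    }
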
	I0610 04:36:02.991487   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:07.993804   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:07.993975   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:08.009612   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:36:08.009697   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:08.022104   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:36:08.022178   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:08.033526   16595 logs.go:276] 2 containers: [66a1f23521e4 2dd6e65272f9]
	I0610 04:36:08.033586   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:08.043668   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:36:08.043736   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:08.053783   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:36:08.053851   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:08.063878   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:36:08.063936   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:08.074509   16595 logs.go:276] 0 containers: []
	W0610 04:36:08.074522   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:08.074569   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:08.085139   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:36:08.085156   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:36:08.085161   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:36:08.098912   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:36:08.098922   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:36:08.110844   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:36:08.110855   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:36:08.128180   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:08.128190   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:08.151949   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:36:08.151956   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:08.163927   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:08.163939   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:08.198566   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:36:08.198575   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:36:08.220614   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:36:08.220624   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:36:08.234206   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:36:08.234217   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:36:08.245958   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:36:08.245969   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:36:08.257449   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:08.257459   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:08.291764   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:08.291772   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:08.295837   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:36:08.295845   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:36:10.809660   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:15.812037   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:15.812171   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:15.828232   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:36:15.828315   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:15.844043   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:36:15.844110   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:15.858690   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:36:15.858761   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:15.871244   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:36:15.871308   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:15.882096   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:36:15.882163   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:15.892744   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:36:15.892833   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:15.903449   16595 logs.go:276] 0 containers: []
	W0610 04:36:15.903485   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:15.903540   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:15.914569   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:36:15.914585   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:15.914591   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:15.919146   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:36:15.919152   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:36:15.930982   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:36:15.930993   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:36:15.943420   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:15.943431   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:15.977702   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:36:15.977719   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:36:15.989341   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:15.989355   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:16.013603   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:16.013612   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:16.095124   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:36:16.095142   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:36:16.109539   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:36:16.109550   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:36:16.120992   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:36:16.121002   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:36:16.135040   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:36:16.135052   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:36:16.145829   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:36:16.145839   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:36:16.157197   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:36:16.157208   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:36:16.171637   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:36:16.171648   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:36:16.188964   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:36:16.188978   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
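
From 04:36:15 onward the coredns query returns four IDs instead of two; since the filter uses `-a`, exited containers stay in the list, so the extra IDs are consistent with coredns restarting while the apiserver is unreachable. A small hypothetical helper for spotting newly appeared IDs between two polls:

    package main

    import "fmt"

    // newIDs reports container IDs present in the current poll but not the
    // previous one, e.g. the coredns list growing from two to four entries.
    func newIDs(prev, cur []string) []string {
        seen := make(map[string]bool, len(prev))
        for _, id := range prev {
            seen[id] = true
        }
        var added []string
        for _, id := range cur {
            if !seen[id] {
                added = append(added, id)
            }
        }
        return added
    }

    func main() {
        prev := []string{"66a1f23521e4", "2dd6e65272f9"}
        cur := []string{"0781efcca856", "a748ea7b9201", "66a1f23521e4", "2dd6e65272f9"}
        fmt.Println(newIDs(prev, cur)) // [0781efcca856 a748ea7b9201]
    }
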
	I0610 04:36:18.702571   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:23.705063   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:23.705365   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:23.739259   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:36:23.739379   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:23.756407   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:36:23.756495   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:23.769352   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:36:23.769433   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:23.780541   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:36:23.780616   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:23.792937   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:36:23.793004   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:23.803252   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:36:23.803316   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:23.813848   16595 logs.go:276] 0 containers: []
	W0610 04:36:23.813860   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:23.813917   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:23.824485   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:36:23.824502   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:23.824507   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:23.860856   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:23.860868   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:23.865454   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:36:23.865461   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:36:23.877467   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:36:23.877482   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:36:23.894366   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:23.894376   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:23.918525   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:36:23.918532   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:36:23.930657   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:36:23.930668   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:36:23.945245   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:36:23.945259   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:36:23.956977   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:36:23.956987   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:36:23.968709   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:36:23.968722   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:23.981534   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:36:23.981545   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:36:23.996815   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:36:23.996828   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:36:24.008335   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:36:24.008345   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:36:24.024342   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:36:24.024355   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:36:24.038890   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:24.038902   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:26.573883   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:31.575332   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:31.575516   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:31.597980   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:36:31.598055   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:31.611522   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:36:31.611611   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:31.623360   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:36:31.623430   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:31.634471   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:36:31.634536   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:31.645540   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:36:31.645610   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:31.656025   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:36:31.656085   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:31.666111   16595 logs.go:276] 0 containers: []
	W0610 04:36:31.666123   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:31.666179   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:31.676146   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:36:31.676163   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:36:31.676169   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:36:31.691001   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:36:31.691012   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:36:31.703159   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:31.703170   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:31.708262   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:36:31.708269   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:36:31.722980   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:36:31.722994   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:36:31.734645   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:36:31.734656   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:36:31.746489   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:31.746504   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:31.783620   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:36:31.783631   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:36:31.799779   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:36:31.799789   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:36:31.811813   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:31.811822   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:31.836808   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:36:31.836818   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:36:31.848580   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:36:31.848590   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:36:31.860199   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:36:31.860209   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:36:31.885093   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:31.885103   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:31.920058   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:36:31.920076   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:34.433273   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:39.435710   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:39.435898   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:39.453784   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:36:39.453868   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:39.471270   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:36:39.471339   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:39.482148   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:36:39.482229   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:39.492420   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:36:39.492488   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:39.502983   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:36:39.503049   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:39.514351   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:36:39.514417   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:39.524835   16595 logs.go:276] 0 containers: []
	W0610 04:36:39.524846   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:39.524899   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:39.535134   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:36:39.535154   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:36:39.535159   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:36:39.548993   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:36:39.549007   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:36:39.560437   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:39.560449   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:39.583861   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:36:39.583870   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:36:39.598079   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:36:39.598089   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:36:39.609668   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:36:39.609680   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:36:39.621248   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:36:39.621258   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:36:39.633008   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:39.633022   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:39.667785   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:39.667792   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:39.672503   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:36:39.672509   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:36:39.687245   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:36:39.687255   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:39.699063   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:39.699075   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:39.734978   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:36:39.734989   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:36:39.750843   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:36:39.750852   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:36:39.765566   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:36:39.765578   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:36:42.283568   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:47.286049   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:47.286233   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:47.300140   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:36:47.300201   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:47.311476   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:36:47.311547   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:47.322500   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:36:47.322568   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:47.339881   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:36:47.339949   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:47.349991   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:36:47.350059   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:47.360084   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:36:47.360145   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:47.371077   16595 logs.go:276] 0 containers: []
	W0610 04:36:47.371097   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:47.371147   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:47.383912   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:36:47.383930   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:36:47.383971   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:36:47.396091   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:36:47.396105   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:36:47.408840   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:47.408850   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:47.442542   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:47.442554   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:47.483118   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:36:47.483132   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:36:47.497521   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:36:47.497534   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:36:47.509282   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:36:47.509296   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:47.521067   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:47.521079   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:47.525464   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:36:47.525474   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:36:47.536943   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:36:47.536957   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:36:47.553339   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:36:47.553354   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:36:47.575034   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:36:47.575045   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:36:47.589364   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:36:47.589374   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:36:47.603013   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:36:47.603024   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:36:47.618774   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:47.618784   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:50.144437   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:55.146706   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:55.146875   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:55.166052   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:36:55.166141   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:55.180809   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:36:55.180880   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:55.192481   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:36:55.192553   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:55.203051   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:36:55.203122   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:55.213930   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:36:55.214000   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:55.224317   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:36:55.224383   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:55.234820   16595 logs.go:276] 0 containers: []
	W0610 04:36:55.234830   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:55.234884   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:55.245086   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:36:55.245104   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:36:55.245109   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:36:55.259243   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:36:55.259256   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:36:55.270949   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:55.270959   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:55.276030   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:36:55.276037   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:36:55.291633   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:36:55.291644   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:36:55.312665   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:36:55.312675   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:36:55.324807   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:36:55.324818   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:36:55.336069   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:36:55.336081   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:36:55.354378   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:36:55.354388   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:55.366530   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:55.366541   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:55.400410   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:55.400419   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:55.436451   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:36:55.436463   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:36:55.448308   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:36:55.448318   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:36:55.465243   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:36:55.465257   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:36:55.501356   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:55.501366   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:58.027614   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:03.029917   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:03.030061   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:03.043964   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:37:03.044039   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:03.054913   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:37:03.054987   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:03.065459   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:37:03.065538   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:03.075978   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:37:03.076046   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:03.086236   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:37:03.086295   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:03.096588   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:37:03.096649   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:03.106203   16595 logs.go:276] 0 containers: []
	W0610 04:37:03.106213   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:03.106262   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:03.116838   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:37:03.116856   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:37:03.116863   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:37:03.128378   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:37:03.128389   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:37:03.140179   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:37:03.140191   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:03.151338   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:03.151349   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:03.185934   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:37:03.185945   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:37:03.200110   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:37:03.200123   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:37:03.213900   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:37:03.213912   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:37:03.228157   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:37:03.228168   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:37:03.249669   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:37:03.249679   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:37:03.261215   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:03.261224   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:03.285387   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:03.285403   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:03.319846   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:03.319855   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:03.324978   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:37:03.324986   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:37:03.336595   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:37:03.336606   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:37:03.347990   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:37:03.348006   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:37:05.861587   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:10.864005   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:10.864128   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:10.877061   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:37:10.877140   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:10.888260   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:37:10.888330   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:10.899010   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:37:10.899077   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:10.910228   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:37:10.910293   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:10.929523   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:37:10.929587   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:10.940215   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:37:10.940285   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:10.951016   16595 logs.go:276] 0 containers: []
	W0610 04:37:10.951027   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:10.951081   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:10.965830   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:37:10.965846   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:10.965851   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:10.970180   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:10.970190   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:11.005459   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:37:11.005470   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:37:11.017253   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:37:11.017264   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:37:11.034386   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:37:11.034400   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:37:11.046632   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:11.046646   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:11.080268   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:37:11.080275   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:37:11.094773   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:37:11.094783   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:37:11.108213   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:37:11.108224   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:37:11.119740   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:37:11.119750   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:37:11.131522   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:37:11.131531   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:37:11.143254   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:11.143265   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:11.167836   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:37:11.167845   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:11.180049   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:37:11.180060   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:37:11.191871   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:37:11.191887   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:37:13.715423   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:18.716044   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:18.716336   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:18.733935   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:37:18.734015   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:18.746755   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:37:18.746830   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:18.757954   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:37:18.758025   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:18.768837   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:37:18.768901   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:18.779320   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:37:18.779388   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:18.791364   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:37:18.791432   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:18.801819   16595 logs.go:276] 0 containers: []
	W0610 04:37:18.801829   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:18.801880   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:18.812424   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:37:18.812441   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:37:18.812452   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:37:18.824759   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:18.824770   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:18.860201   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:37:18.860209   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:37:18.874183   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:37:18.874193   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:37:18.891103   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:37:18.891113   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:37:18.902590   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:37:18.902600   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:37:18.922178   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:37:18.922189   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:37:18.934642   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:37:18.934652   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:37:18.946278   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:37:18.946291   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:37:18.957463   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:18.957473   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:18.980447   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:37:18.980463   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:18.991967   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:18.991980   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:18.996670   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:37:18.996679   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:37:19.008309   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:19.008319   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:19.047183   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:37:19.047197   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:37:21.561675   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:26.564121   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:26.564302   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:26.587596   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:37:26.587708   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:26.603884   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:37:26.603959   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:26.621730   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:37:26.621791   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:26.632513   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:37:26.632579   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:26.643225   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:37:26.643288   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:26.654130   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:37:26.654192   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:26.664407   16595 logs.go:276] 0 containers: []
	W0610 04:37:26.664418   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:26.664474   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:26.675401   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:37:26.675417   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:37:26.675422   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:37:26.687367   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:37:26.687378   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:37:26.699323   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:37:26.699335   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:37:26.711496   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:37:26.711509   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:37:26.722975   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:37:26.722987   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:37:26.734341   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:37:26.734351   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:26.755126   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:37:26.755138   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:37:26.772928   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:26.772938   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:26.808836   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:26.808846   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:26.850821   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:37:26.850832   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:37:26.865502   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:37:26.865514   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:37:26.886287   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:26.886297   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:26.890621   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:37:26.890630   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:37:26.905408   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:37:26.905417   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:37:26.918459   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:26.918468   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:29.444983   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:34.447475   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:34.447784   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:34.477770   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:37:34.477881   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:34.493825   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:37:34.493911   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:34.506652   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:37:34.506727   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:34.529996   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:37:34.530067   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:34.544435   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:37:34.544504   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:34.557281   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:37:34.557354   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:34.567778   16595 logs.go:276] 0 containers: []
	W0610 04:37:34.567794   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:34.567856   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:34.577928   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:37:34.577945   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:37:34.577951   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:37:34.595081   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:37:34.595092   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:37:34.606614   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:34.606627   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:34.611241   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:37:34.611250   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:37:34.623942   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:37:34.623953   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:37:34.635241   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:37:34.635252   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:37:34.650614   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:37:34.650626   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:37:34.665024   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:37:34.665036   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:37:34.687488   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:37:34.687501   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:37:34.699346   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:37:34.699359   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:37:34.711322   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:37:34.711335   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:34.723427   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:34.723439   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:34.757634   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:34.757643   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:34.793316   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:37:34.793327   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:37:34.806365   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:34.806376   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:37.332899   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:42.335203   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:42.335463   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:42.361607   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:37:42.361721   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:42.379106   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:37:42.379188   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:42.392064   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:37:42.392144   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:42.403552   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:37:42.403619   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:42.413973   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:37:42.414038   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:42.424574   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:37:42.424634   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:42.435141   16595 logs.go:276] 0 containers: []
	W0610 04:37:42.435153   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:42.435208   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:42.445240   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:37:42.445255   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:42.445261   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:42.481327   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:37:42.481337   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:37:42.495708   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:37:42.495721   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:37:42.508607   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:42.508621   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:42.531617   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:42.531628   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:42.536452   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:37:42.536459   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:37:42.552813   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:37:42.552824   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:37:42.568146   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:37:42.568157   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:42.580784   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:37:42.580794   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:37:42.592904   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:37:42.592915   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:37:42.611867   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:37:42.611877   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:37:42.629583   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:42.629593   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:42.665078   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:37:42.665089   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:37:42.676190   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:37:42.676199   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:37:42.688071   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:37:42.688082   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:37:45.202178   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:50.204504   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:50.204719   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:50.224724   16595 logs.go:276] 1 containers: [d2e613d8e061]
	I0610 04:37:50.224823   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:50.240161   16595 logs.go:276] 1 containers: [c2c4254d1da3]
	I0610 04:37:50.240240   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:50.252477   16595 logs.go:276] 4 containers: [0781efcca856 a748ea7b9201 66a1f23521e4 2dd6e65272f9]
	I0610 04:37:50.252557   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:50.263235   16595 logs.go:276] 1 containers: [d7d0948d81df]
	I0610 04:37:50.263301   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:50.273700   16595 logs.go:276] 1 containers: [8decbb1b6056]
	I0610 04:37:50.273767   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:50.284248   16595 logs.go:276] 1 containers: [e73032c18017]
	I0610 04:37:50.284315   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:50.294452   16595 logs.go:276] 0 containers: []
	W0610 04:37:50.294463   16595 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:50.294517   16595 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:50.306967   16595 logs.go:276] 1 containers: [368c307a6bed]
	I0610 04:37:50.306983   16595 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:50.306988   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:50.343327   16595 logs.go:123] Gathering logs for kube-apiserver [d2e613d8e061] ...
	I0610 04:37:50.343340   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2e613d8e061"
	I0610 04:37:50.357823   16595 logs.go:123] Gathering logs for coredns [0781efcca856] ...
	I0610 04:37:50.357836   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0781efcca856"
	I0610 04:37:50.368903   16595 logs.go:123] Gathering logs for kube-proxy [8decbb1b6056] ...
	I0610 04:37:50.368914   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8decbb1b6056"
	I0610 04:37:50.380344   16595 logs.go:123] Gathering logs for kube-controller-manager [e73032c18017] ...
	I0610 04:37:50.380354   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e73032c18017"
	I0610 04:37:50.397487   16595 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:50.397497   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:50.401861   16595 logs.go:123] Gathering logs for coredns [a748ea7b9201] ...
	I0610 04:37:50.401868   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a748ea7b9201"
	I0610 04:37:50.413679   16595 logs.go:123] Gathering logs for coredns [66a1f23521e4] ...
	I0610 04:37:50.413690   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1f23521e4"
	I0610 04:37:50.425651   16595 logs.go:123] Gathering logs for kube-scheduler [d7d0948d81df] ...
	I0610 04:37:50.425666   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d0948d81df"
	I0610 04:37:50.439886   16595 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:50.439895   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:50.474671   16595 logs.go:123] Gathering logs for etcd [c2c4254d1da3] ...
	I0610 04:37:50.474682   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c4254d1da3"
	I0610 04:37:50.492463   16595 logs.go:123] Gathering logs for coredns [2dd6e65272f9] ...
	I0610 04:37:50.492473   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd6e65272f9"
	I0610 04:37:50.507118   16595 logs.go:123] Gathering logs for container status ...
	I0610 04:37:50.507129   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:50.518935   16595 logs.go:123] Gathering logs for storage-provisioner [368c307a6bed] ...
	I0610 04:37:50.518950   16595 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 368c307a6bed"
	I0610 04:37:50.537370   16595 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:50.537383   16595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:53.062613   16595 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:58.064696   16595 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:58.069940   16595 out.go:177] 
	W0610 04:37:58.074132   16595 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0610 04:37:58.074140   16595 out.go:239] * 
	W0610 04:37:58.074623   16595 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:37:58.090025   16595 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-017000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-06-10 04:37:58.173479 -0700 PDT m=+1321.864358751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-017000 -n running-upgrade-017000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-017000 -n running-upgrade-017000: exit status 2 (15.633273958s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-017000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-463000 sudo cat                            | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo cat                            | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo                                | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo                                | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo                                | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo cat                            | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo cat                            | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo                                | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo                                | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo                                | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo find                           | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-463000 sudo crio                           | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-463000                                     | cilium-463000             | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT | 10 Jun 24 04:27 PDT |
	| start   | -p kubernetes-upgrade-146000                         | kubernetes-upgrade-146000 | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-306000                             | offline-docker-306000     | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT | 10 Jun 24 04:27 PDT |
	| stop    | -p kubernetes-upgrade-146000                         | kubernetes-upgrade-146000 | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT | 10 Jun 24 04:27 PDT |
	| start   | -p stopped-upgrade-227000                            | minikube                  | jenkins | v1.26.0 | 10 Jun 24 04:27 PDT | 10 Jun 24 04:28 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146000                         | kubernetes-upgrade-146000 | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-146000                         | kubernetes-upgrade-146000 | jenkins | v1.33.1 | 10 Jun 24 04:27 PDT | 10 Jun 24 04:27 PDT |
	| start   | -p running-upgrade-017000                            | minikube                  | jenkins | v1.26.0 | 10 Jun 24 04:27 PDT | 10 Jun 24 04:29 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-227000 stop                          | minikube                  | jenkins | v1.26.0 | 10 Jun 24 04:28 PDT | 10 Jun 24 04:28 PDT |
	| start   | -p stopped-upgrade-227000                            | stopped-upgrade-227000    | jenkins | v1.33.1 | 10 Jun 24 04:28 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-017000                            | running-upgrade-017000    | jenkins | v1.33.1 | 10 Jun 24 04:29 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-227000                            | stopped-upgrade-227000    | jenkins | v1.33.1 | 10 Jun 24 04:38 PDT | 10 Jun 24 04:38 PDT |
	| start   | -p pause-029000 --memory=2048                        | pause-029000              | jenkins | v1.33.1 | 10 Jun 24 04:38 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 04:38:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 04:38:12.135527   16861 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:38:12.135657   16861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:38:12.135659   16861 out.go:304] Setting ErrFile to fd 2...
	I0610 04:38:12.135661   16861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:38:12.135792   16861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:38:12.136865   16861 out.go:298] Setting JSON to false
	I0610 04:38:12.154087   16861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9463,"bootTime":1718010029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:38:12.154148   16861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:38:12.160871   16861 out.go:177] * [pause-029000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:38:12.168802   16861 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:38:12.168818   16861 notify.go:220] Checking for updates...
	I0610 04:38:12.174816   16861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:38:12.177813   16861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:38:12.179055   16861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:38:12.181737   16861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:38:12.184777   16861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:38:12.188062   16861 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:38:12.188125   16861 config.go:182] Loaded profile config "running-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 04:38:12.188167   16861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:38:12.191717   16861 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:38:12.198775   16861 start.go:297] selected driver: qemu2
	I0610 04:38:12.198778   16861 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:38:12.198782   16861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:38:12.200908   16861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:38:12.203797   16861 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:38:12.206831   16861 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:38:12.206844   16861 cni.go:84] Creating CNI manager for ""
	I0610 04:38:12.206848   16861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:38:12.206851   16861 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:38:12.206872   16861 start.go:340] cluster config:
	{Name:pause-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:38:12.211162   16861 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:38:12.218770   16861 out.go:177] * Starting "pause-029000" primary control-plane node in "pause-029000" cluster
	I0610 04:38:12.222740   16861 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:38:12.222757   16861 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:38:12.222762   16861 cache.go:56] Caching tarball of preloaded images
	I0610 04:38:12.222808   16861 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:38:12.222811   16861 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:38:12.222861   16861 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/pause-029000/config.json ...
	I0610 04:38:12.222870   16861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/pause-029000/config.json: {Name:mk4918218d48ce84801a533f310d59802ecda37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:38:12.223165   16861 start.go:360] acquireMachinesLock for pause-029000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:38:12.223193   16861 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "pause-029000"
	I0610 04:38:12.223201   16861 start.go:93] Provisioning new machine with config: &{Name:pause-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:38:12.223224   16861 start.go:125] createHost starting for "" (driver="qemu2")
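	
	Note: this "Last Start" trace ends at createHost with no provisioning output after it, which is the same point at which most of the qemu2 starts in this report stall. A minimal way to capture more driver detail, assuming the same binary and profile are still available, is to re-run the start with klog verbosity raised:
	
	    $ out/minikube-darwin-arm64 start -p pause-029000 --memory=2048 \
	        --install-addons=false --wait=all --driver=qemu2 \
	        --alsologtostderr -v=7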
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-06-10 11:28:37 UTC, ends at Mon 2024-06-10 11:38:13 UTC. --
	Jun 10 11:37:58 running-upgrade-017000 dockerd[4416]: time="2024-06-10T11:37:58.753386387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:37:58 running-upgrade-017000 dockerd[4416]: time="2024-06-10T11:37:58.753671053Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/06e1c6df016fae1f8f71c444d3970bcc8e01687292d6e89ed87f21469ff2914f pid=20589 runtime=io.containerd.runc.v2
	Jun 10 11:37:58 running-upgrade-017000 dockerd[4416]: time="2024-06-10T11:37:58.753762136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 11:37:58 running-upgrade-017000 dockerd[4416]: time="2024-06-10T11:37:58.753794178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 11:37:58 running-upgrade-017000 dockerd[4416]: time="2024-06-10T11:37:58.753820094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 11:37:58 running-upgrade-017000 dockerd[4416]: time="2024-06-10T11:37:58.753881761Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/63d2c82f24e812c3d9974e3972e1c7affbd88a855c9cf9269cf53400761bbd20 pid=20586 runtime=io.containerd.runc.v2
	Jun 10 11:37:59 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:37:59Z" level=error msg="ContainerStats resp: {0x4000761b80 linux}"
	Jun 10 11:38:00 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:00Z" level=error msg="ContainerStats resp: {0x40007d4bc0 linux}"
	Jun 10 11:38:00 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:00Z" level=error msg="ContainerStats resp: {0x40009d9c80 linux}"
	Jun 10 11:38:00 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:00Z" level=error msg="ContainerStats resp: {0x40007d5a80 linux}"
	Jun 10 11:38:00 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:00Z" level=error msg="ContainerStats resp: {0x40009d9f80 linux}"
	Jun 10 11:38:00 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:00Z" level=error msg="ContainerStats resp: {0x4000822c40 linux}"
	Jun 10 11:38:01 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 10 11:38:06 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:06Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 10 11:38:10 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:10Z" level=error msg="ContainerStats resp: {0x4000676d40 linux}"
	Jun 10 11:38:10 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:10Z" level=error msg="ContainerStats resp: {0x4000760340 linux}"
	Jun 10 11:38:11 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 10 11:38:11 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:11Z" level=error msg="ContainerStats resp: {0x40007d4c40 linux}"
	Jun 10 11:38:12 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:12Z" level=error msg="ContainerStats resp: {0x400087c400 linux}"
	Jun 10 11:38:12 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:12Z" level=error msg="ContainerStats resp: {0x400087c940 linux}"
	Jun 10 11:38:12 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:12Z" level=error msg="ContainerStats resp: {0x400087cdc0 linux}"
	Jun 10 11:38:12 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:12Z" level=error msg="ContainerStats resp: {0x400087d400 linux}"
	Jun 10 11:38:12 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:12Z" level=error msg="ContainerStats resp: {0x400087d800 linux}"
	Jun 10 11:38:12 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:12Z" level=error msg="ContainerStats resp: {0x400087c1c0 linux}"
	Jun 10 11:38:12 running-upgrade-017000 cri-dockerd[4135]: time="2024-06-10T11:38:12Z" level=error msg="ContainerStats resp: {0x400087c5c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	06e1c6df016fa       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   3673d1c9285cd
	63d2c82f24e81       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   53b977ead4518
	0781efcca8569       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   53b977ead4518
	a748ea7b92011       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3673d1c9285cd
	8decbb1b60562       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   3856929f90ace
	368c307a6bed7       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   99189aba19b5a
	d2e613d8e0619       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   f091bee4c64e5
	d7d0948d81dfe       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   44548a8a012d4
	e73032c18017d       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   225a1aeadc794
	c2c4254d1da31       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   209a5ba4256c1
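	
	Note: only the two coredns containers have restarted (their ATTEMPT 1 instances are Exited, ATTEMPT 2 is Running); every other component is still on its first attempt. If the VM is still reachable, the full container history can be listed with crictl (assuming it is installed in the guest, as it normally is in minikube images):
	
	    $ out/minikube-darwin-arm64 ssh -p running-upgrade-017000 -- sudo crictl ps -a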
	
	
	==> coredns [06e1c6df016f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3197280878311834633.2964591096811351605. HINFO: read udp 10.244.0.2:55869->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3197280878311834633.2964591096811351605. HINFO: read udp 10.244.0.2:51250->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3197280878311834633.2964591096811351605. HINFO: read udp 10.244.0.2:55251->10.0.2.3:53: i/o timeout
	
	
	==> coredns [0781efcca856] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 322326615948802901.7885892805790069483. HINFO: read udp 10.244.0.3:47249->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 322326615948802901.7885892805790069483. HINFO: read udp 10.244.0.3:45607->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 322326615948802901.7885892805790069483. HINFO: read udp 10.244.0.3:45814->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 322326615948802901.7885892805790069483. HINFO: read udp 10.244.0.3:33134->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [63d2c82f24e8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7466121068483913652.4500391472584332954. HINFO: read udp 10.244.0.3:37836->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7466121068483913652.4500391472584332954. HINFO: read udp 10.244.0.3:50166->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7466121068483913652.4500391472584332954. HINFO: read udp 10.244.0.3:41358->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a748ea7b9201] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 896606240680939015.2807149652143664896. HINFO: read udp 10.244.0.2:35784->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 896606240680939015.2807149652143664896. HINFO: read udp 10.244.0.2:56911->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 896606240680939015.2807149652143664896. HINFO: read udp 10.244.0.2:35653->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 896606240680939015.2807149652143664896. HINFO: read udp 10.244.0.2:50213->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 896606240680939015.2807149652143664896. HINFO: read udp 10.244.0.2:37989->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 896606240680939015.2807149652143664896. HINFO: read udp 10.244.0.2:55297->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 896606240680939015.2807149652143664896. HINFO: read udp 10.244.0.2:42820->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 896606240680939015.2807149652143664896. HINFO: read udp 10.244.0.2:58529->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
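	
	Note: all four coredns logs fail the same way: HINFO probes to 10.0.2.3:53 time out, where 10.0.2.3 is the DNS forwarder that QEMU user-mode networking exposes to the guest, and the exited instances shut down cleanly on SIGTERM rather than crashing. A quick check of that upstream resolver from inside the guest, assuming busybox nslookup is present in the Buildroot image:
	
	    $ out/minikube-darwin-arm64 ssh -p running-upgrade-017000 -- nslookup kubernetes.io 10.0.2.3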
	
	
	==> describe nodes <==
	Name:               running-upgrade-017000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-017000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c2b65c1940ca3bdd8a4d1a84aa1ecb6d007e0b42
	                    minikube.k8s.io/name=running-upgrade-017000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T04_33_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:33:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-017000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:38:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:33:57 +0000   Mon, 10 Jun 2024 11:33:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:33:57 +0000   Mon, 10 Jun 2024 11:33:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:33:57 +0000   Mon, 10 Jun 2024 11:33:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:33:57 +0000   Mon, 10 Jun 2024 11:33:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-017000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9ae594427044d808621d977355cba1a
	  System UUID:                a9ae594427044d808621d977355cba1a
	  Boot ID:                    1a2cc61b-ab4b-43b1-90e4-f8ab02376fda
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-rdn2r                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-vthc2                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-017000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-017000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-017000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-74bbx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-017000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
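	  (Percentages are requests/limits over the node's allocatable capacity, e.g. 850m of 2 CPUs ≈ 42%, and 240Mi = 245760Ki of 2148820Ki memory ≈ 11%.)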
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-017000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-017000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-017000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-017000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-017000 event: Registered Node running-upgrade-017000 in Controller
	
	
	==> dmesg <==
	[  +0.069503] systemd-fstab-generator[883]: Ignoring "noauto" for root device
	[  +0.075563] systemd-fstab-generator[894]: Ignoring "noauto" for root device
	[  +1.141439] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.083356] systemd-fstab-generator[1043]: Ignoring "noauto" for root device
	[  +0.084546] systemd-fstab-generator[1054]: Ignoring "noauto" for root device
	[  +2.186011] systemd-fstab-generator[1282]: Ignoring "noauto" for root device
	[Jun10 11:29] systemd-fstab-generator[1926]: Ignoring "noauto" for root device
	[ +14.073597] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.354906] systemd-fstab-generator[2638]: Ignoring "noauto" for root device
	[  +0.172916] systemd-fstab-generator[2693]: Ignoring "noauto" for root device
	[  +0.101872] systemd-fstab-generator[2727]: Ignoring "noauto" for root device
	[  +0.114674] systemd-fstab-generator[2740]: Ignoring "noauto" for root device
	[  +5.257714] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.558048] systemd-fstab-generator[4092]: Ignoring "noauto" for root device
	[  +0.092968] systemd-fstab-generator[4103]: Ignoring "noauto" for root device
	[  +0.087949] systemd-fstab-generator[4114]: Ignoring "noauto" for root device
	[  +0.101769] systemd-fstab-generator[4128]: Ignoring "noauto" for root device
	[  +2.780425] systemd-fstab-generator[4402]: Ignoring "noauto" for root device
	[  +3.214594] systemd-fstab-generator[4777]: Ignoring "noauto" for root device
	[  +1.094228] systemd-fstab-generator[4902]: Ignoring "noauto" for root device
	[  +3.953854] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.085970] kauditd_printk_skb: 1 callbacks suppressed
	[Jun10 11:33] systemd-fstab-generator[13656]: Ignoring "noauto" for root device
	[  +5.638671] systemd-fstab-generator[14251]: Ignoring "noauto" for root device
	[  +0.467139] systemd-fstab-generator[14383]: Ignoring "noauto" for root device
	
	
	==> etcd [c2c4254d1da3] <==
	{"level":"info","ts":"2024-06-10T11:33:52.509Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T11:33:52.509Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-10T11:33:52.509Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-06-10T11:33:52.509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-06-10T11:33:52.509Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-06-10T11:33:52.509Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-06-10T11:33:52.509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-017000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:33:53.403Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:33:53.404Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:33:53.404Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:33:53.404Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:33:53.404Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:33:53.404Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T11:33:53.405Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-06-10T11:33:53.405Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:33:53.405Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:38:14 up 9 min,  0 users,  load average: 0.45, 0.34, 0.16
	Linux running-upgrade-017000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d2e613d8e061] <==
	I0610 11:33:54.614959       1 controller.go:611] quota admission added evaluator for: namespaces
	I0610 11:33:54.654979       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0610 11:33:54.654995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 11:33:54.656078       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 11:33:54.656118       1 cache.go:39] Caches are synced for autoregister controller
	I0610 11:33:54.656094       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0610 11:33:54.703215       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0610 11:33:55.405726       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 11:33:55.570168       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 11:33:55.575677       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 11:33:55.575711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 11:33:55.735050       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 11:33:55.747375       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 11:33:55.819548       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 11:33:55.822411       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0610 11:33:55.822862       1 controller.go:611] quota admission added evaluator for: endpoints
	I0610 11:33:55.824435       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 11:33:56.712399       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0610 11:33:57.091050       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0610 11:33:57.094514       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 11:33:57.119400       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0610 11:33:57.173279       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 11:34:10.166140       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0610 11:34:10.365954       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0610 11:34:11.380494       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [e73032c18017] <==
	I0610 11:34:09.540319       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0610 11:34:09.544185       1 shared_informer.go:262] Caches are synced for crt configmap
	I0610 11:34:09.550444       1 shared_informer.go:262] Caches are synced for GC
	I0610 11:34:09.561258       1 shared_informer.go:262] Caches are synced for endpoint
	I0610 11:34:09.564079       1 shared_informer.go:262] Caches are synced for HPA
	I0610 11:34:09.564673       1 shared_informer.go:262] Caches are synced for cronjob
	I0610 11:34:09.564697       1 shared_informer.go:262] Caches are synced for deployment
	I0610 11:34:09.567335       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0610 11:34:09.618641       1 shared_informer.go:262] Caches are synced for service account
	I0610 11:34:09.642063       1 shared_informer.go:262] Caches are synced for PVC protection
	I0610 11:34:09.662133       1 shared_informer.go:262] Caches are synced for persistent volume
	I0610 11:34:09.664535       1 shared_informer.go:262] Caches are synced for stateful set
	I0610 11:34:09.666388       1 shared_informer.go:262] Caches are synced for namespace
	I0610 11:34:09.712349       1 shared_informer.go:262] Caches are synced for ephemeral
	I0610 11:34:09.715245       1 shared_informer.go:262] Caches are synced for expand
	I0610 11:34:09.719678       1 shared_informer.go:262] Caches are synced for resource quota
	I0610 11:34:09.725363       1 shared_informer.go:262] Caches are synced for attach detach
	I0610 11:34:09.768988       1 shared_informer.go:262] Caches are synced for resource quota
	I0610 11:34:10.169408       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-74bbx"
	I0610 11:34:10.181478       1 shared_informer.go:262] Caches are synced for garbage collector
	I0610 11:34:10.211954       1 shared_informer.go:262] Caches are synced for garbage collector
	I0610 11:34:10.211965       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0610 11:34:10.367255       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0610 11:34:10.568246       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vthc2"
	I0610 11:34:10.572100       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-rdn2r"
	
	
	==> kube-proxy [8decbb1b6056] <==
	I0610 11:34:11.314821       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0610 11:34:11.314936       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0610 11:34:11.314968       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0610 11:34:11.373714       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0610 11:34:11.373728       1 server_others.go:206] "Using iptables Proxier"
	I0610 11:34:11.373742       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0610 11:34:11.373842       1 server.go:661] "Version info" version="v1.24.1"
	I0610 11:34:11.373850       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:34:11.374961       1 config.go:317] "Starting service config controller"
	I0610 11:34:11.374972       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0610 11:34:11.374993       1 config.go:226] "Starting endpoint slice config controller"
	I0610 11:34:11.374996       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0610 11:34:11.376346       1 config.go:444] "Starting node config controller"
	I0610 11:34:11.376350       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0610 11:34:11.475298       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0610 11:34:11.475322       1 shared_informer.go:262] Caches are synced for service config
	I0610 11:34:11.476469       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [d7d0948d81df] <==
	W0610 11:33:54.613430       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 11:33:54.613436       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 11:33:54.613464       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 11:33:54.613472       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 11:33:54.613489       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 11:33:54.613504       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 11:33:54.613532       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 11:33:54.613539       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 11:33:54.613566       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 11:33:54.613570       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 11:33:54.613658       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 11:33:54.613679       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 11:33:54.614467       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 11:33:54.614801       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 11:33:54.614736       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 11:33:54.614826       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 11:33:54.614749       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 11:33:54.614894       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 11:33:54.614763       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 11:33:54.614953       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 11:33:54.614780       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 11:33:54.614975       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 11:33:55.439561       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 11:33:55.439777       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0610 11:33:56.211958       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
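	
	Note: the "forbidden" list/watch warnings at 11:33:54-55 are the usual bootstrap race while the scheduler's RBAC bindings are still being created; its caches sync one second later (last line), so the scheduler is not the failing component here. If the VM is still up, the container's recent output can confirm it stayed quiet afterwards (container ID from the status table above):
	
	    $ out/minikube-darwin-arm64 ssh -p running-upgrade-017000 -- sudo docker logs --tail 5 d7d0948d81df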
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-06-10 11:28:37 UTC, ends at Mon 2024-06-10 11:38:14 UTC. --
	Jun 10 11:34:09 running-upgrade-017000 kubelet[14257]: I0610 11:34:09.539818   14257 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 10 11:34:09 running-upgrade-017000 kubelet[14257]: I0610 11:34:09.540164   14257 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 10 11:34:09 running-upgrade-017000 kubelet[14257]: I0610 11:34:09.548476   14257 topology_manager.go:200] "Topology Admit Handler"
	Jun 10 11:34:09 running-upgrade-017000 kubelet[14257]: I0610 11:34:09.740483   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/687cd2df-b5f3-4e47-a8c9-827d6a7aa258-tmp\") pod \"storage-provisioner\" (UID: \"687cd2df-b5f3-4e47-a8c9-827d6a7aa258\") " pod="kube-system/storage-provisioner"
	Jun 10 11:34:09 running-upgrade-017000 kubelet[14257]: I0610 11:34:09.740505   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfq9d\" (UniqueName: \"kubernetes.io/projected/687cd2df-b5f3-4e47-a8c9-827d6a7aa258-kube-api-access-hfq9d\") pod \"storage-provisioner\" (UID: \"687cd2df-b5f3-4e47-a8c9-827d6a7aa258\") " pod="kube-system/storage-provisioner"
	Jun 10 11:34:09 running-upgrade-017000 kubelet[14257]: E0610 11:34:09.844736   14257 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jun 10 11:34:09 running-upgrade-017000 kubelet[14257]: E0610 11:34:09.844757   14257 projected.go:192] Error preparing data for projected volume kube-api-access-hfq9d for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jun 10 11:34:09 running-upgrade-017000 kubelet[14257]: E0610 11:34:09.844794   14257 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/687cd2df-b5f3-4e47-a8c9-827d6a7aa258-kube-api-access-hfq9d podName:687cd2df-b5f3-4e47-a8c9-827d6a7aa258 nodeName:}" failed. No retries permitted until 2024-06-10 11:34:10.344780039 +0000 UTC m=+13.265159292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hfq9d" (UniqueName: "kubernetes.io/projected/687cd2df-b5f3-4e47-a8c9-827d6a7aa258-kube-api-access-hfq9d") pod "storage-provisioner" (UID: "687cd2df-b5f3-4e47-a8c9-827d6a7aa258") : configmap "kube-root-ca.crt" not found
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.170363   14257 topology_manager.go:200] "Topology Admit Handler"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.242991   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a1ed737-cbad-4460-98d6-20c019df4807-kube-proxy\") pod \"kube-proxy-74bbx\" (UID: \"8a1ed737-cbad-4460-98d6-20c019df4807\") " pod="kube-system/kube-proxy-74bbx"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.243013   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a1ed737-cbad-4460-98d6-20c019df4807-xtables-lock\") pod \"kube-proxy-74bbx\" (UID: \"8a1ed737-cbad-4460-98d6-20c019df4807\") " pod="kube-system/kube-proxy-74bbx"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.243024   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a1ed737-cbad-4460-98d6-20c019df4807-lib-modules\") pod \"kube-proxy-74bbx\" (UID: \"8a1ed737-cbad-4460-98d6-20c019df4807\") " pod="kube-system/kube-proxy-74bbx"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.243065   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dw9\" (UniqueName: \"kubernetes.io/projected/8a1ed737-cbad-4460-98d6-20c019df4807-kube-api-access-x6dw9\") pod \"kube-proxy-74bbx\" (UID: \"8a1ed737-cbad-4460-98d6-20c019df4807\") " pod="kube-system/kube-proxy-74bbx"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: E0610 11:34:10.347106   14257 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: E0610 11:34:10.347141   14257 projected.go:192] Error preparing data for projected volume kube-api-access-x6dw9 for pod kube-system/kube-proxy-74bbx: configmap "kube-root-ca.crt" not found
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: E0610 11:34:10.347167   14257 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/8a1ed737-cbad-4460-98d6-20c019df4807-kube-api-access-x6dw9 podName:8a1ed737-cbad-4460-98d6-20c019df4807 nodeName:}" failed. No retries permitted until 2024-06-10 11:34:10.847157074 +0000 UTC m=+13.767536326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x6dw9" (UniqueName: "kubernetes.io/projected/8a1ed737-cbad-4460-98d6-20c019df4807-kube-api-access-x6dw9") pod "kube-proxy-74bbx" (UID: "8a1ed737-cbad-4460-98d6-20c019df4807") : configmap "kube-root-ca.crt" not found
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.572278   14257 topology_manager.go:200] "Topology Admit Handler"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.575615   14257 topology_manager.go:200] "Topology Admit Handler"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.747372   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndb4\" (UniqueName: \"kubernetes.io/projected/a607a527-734e-48bc-99f3-9ebdb4b78338-kube-api-access-tndb4\") pod \"coredns-6d4b75cb6d-rdn2r\" (UID: \"a607a527-734e-48bc-99f3-9ebdb4b78338\") " pod="kube-system/coredns-6d4b75cb6d-rdn2r"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.747408   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2275224a-5694-467e-bfd5-fdd6c2be5e16-config-volume\") pod \"coredns-6d4b75cb6d-vthc2\" (UID: \"2275224a-5694-467e-bfd5-fdd6c2be5e16\") " pod="kube-system/coredns-6d4b75cb6d-vthc2"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.747420   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh7cw\" (UniqueName: \"kubernetes.io/projected/2275224a-5694-467e-bfd5-fdd6c2be5e16-kube-api-access-lh7cw\") pod \"coredns-6d4b75cb6d-vthc2\" (UID: \"2275224a-5694-467e-bfd5-fdd6c2be5e16\") " pod="kube-system/coredns-6d4b75cb6d-vthc2"
	Jun 10 11:34:10 running-upgrade-017000 kubelet[14257]: I0610 11:34:10.747430   14257 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a607a527-734e-48bc-99f3-9ebdb4b78338-config-volume\") pod \"coredns-6d4b75cb6d-rdn2r\" (UID: \"a607a527-734e-48bc-99f3-9ebdb4b78338\") " pod="kube-system/coredns-6d4b75cb6d-rdn2r"
	Jun 10 11:34:11 running-upgrade-017000 kubelet[14257]: I0610 11:34:11.379392   14257 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3673d1c9285cd32e43e74c4a3c42e7fde84e025a423b3c1df785eca9330010fc"
	Jun 10 11:37:59 running-upgrade-017000 kubelet[14257]: I0610 11:37:59.663453   14257 scope.go:110] "RemoveContainer" containerID="66a1f23521e45bae6432771383562e4ee15248a1b802e05f338ed57b9781d5cb"
	Jun 10 11:37:59 running-upgrade-017000 kubelet[14257]: I0610 11:37:59.679031   14257 scope.go:110] "RemoveContainer" containerID="2dd6e65272f9bc87b9d475fc5b40a4faa46a3b714594a95105d3b7edfaf9f3b9"
	
	
	==> storage-provisioner [368c307a6bed] <==
	I0610 11:34:10.634842       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 11:34:10.639803       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 11:34:10.639822       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 11:34:10.642393       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 11:34:10.642542       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"630a0dc8-ff2f-4f5d-bace-1a203019612e", APIVersion:"v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-017000_6c722622-04d9-480d-9087-6361cc1632c0 became leader
	I0610 11:34:10.642560       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-017000_6c722622-04d9-480d-9087-6361cc1632c0!
	I0610 11:34:10.742818       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-017000_6c722622-04d9-480d-9087-6361cc1632c0!
	

                                                
                                                
-- /stdout --
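
Of note, the storage-provisioner log itself is healthy: it completes a normal endpoints-based leader election, acquiring the kube-system/k8s.io-minikube-hostpath lease before starting the provisioner controller. If that election ever looked stuck, one way to inspect the current holder would be the command below (a sketch only; it assumes a reachable apiserver for the profile, which this run did not have by the end):

	# The holder identity is recorded in the control-plane.alpha.kubernetes.io/leader annotation
	kubectl --context running-upgrade-017000 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
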
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-017000 -n running-upgrade-017000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-017000 -n running-upgrade-017000: exit status 2 (15.711964375s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-017000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-017000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-017000
--- FAIL: TestRunningBinaryUpgrade (636.99s)
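
The repeated MountVolume.SetUp failures in the kubelet log above are the projected service-account volumes waiting on the "kube-root-ca.crt" configmap, which kube-controller-manager publishes into each namespace shortly after startup; the 500ms durationBeforeRetry entries show the kubelet retrying as designed, and the pods do start once the configmap appears. A quick manual check against a live cluster might look like this (illustrative, not part of the test run):

	# Has the root CA bundle been published to kube-system yet?
	kubectl --context running-upgrade-017000 -n kube-system get configmap kube-root-ca.crt
	# Once it exists, the pending pods should mount their token volumes on the next retry
	kubectl --context running-upgrade-017000 -n kube-system get pods storage-provisioner kube-proxy-74bbx
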

                                                
                                    
TestKubernetesUpgrade (18.88s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-146000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-146000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.757565458s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-146000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-146000" primary control-plane node in "kubernetes-upgrade-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:27:34.499529   16468 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:27:34.499654   16468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:27:34.499657   16468 out.go:304] Setting ErrFile to fd 2...
	I0610 04:27:34.499659   16468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:27:34.499809   16468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:27:34.500835   16468 out.go:298] Setting JSON to false
	I0610 04:27:34.516761   16468 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8825,"bootTime":1718010029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:27:34.516825   16468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:27:34.522172   16468 out.go:177] * [kubernetes-upgrade-146000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:27:34.536057   16468 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:27:34.536132   16468 notify.go:220] Checking for updates...
	I0610 04:27:34.545100   16468 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:27:34.549047   16468 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:27:34.552055   16468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:27:34.555106   16468 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:27:34.558002   16468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:27:34.561458   16468 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:27:34.561528   16468 config.go:182] Loaded profile config "offline-docker-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:27:34.561574   16468 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:27:34.566082   16468 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:27:34.573191   16468 start.go:297] selected driver: qemu2
	I0610 04:27:34.573199   16468 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:27:34.573206   16468 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:27:34.575465   16468 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:27:34.579122   16468 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:27:34.582112   16468 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 04:27:34.582129   16468 cni.go:84] Creating CNI manager for ""
	I0610 04:27:34.582144   16468 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 04:27:34.582173   16468 start.go:340] cluster config:
	{Name:kubernetes-upgrade-146000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:27:34.587079   16468 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:27:34.593923   16468 out.go:177] * Starting "kubernetes-upgrade-146000" primary control-plane node in "kubernetes-upgrade-146000" cluster
	I0610 04:27:34.598019   16468 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:27:34.598038   16468 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 04:27:34.598047   16468 cache.go:56] Caching tarball of preloaded images
	I0610 04:27:34.598134   16468 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:27:34.598144   16468 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 04:27:34.598204   16468 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/kubernetes-upgrade-146000/config.json ...
	I0610 04:27:34.598215   16468 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/kubernetes-upgrade-146000/config.json: {Name:mkd5b591bdd084a99d60a854d4d60f1137a215e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:27:34.598539   16468 start.go:360] acquireMachinesLock for kubernetes-upgrade-146000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:27:34.598577   16468 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "kubernetes-upgrade-146000"
	I0610 04:27:34.598588   16468 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:27:34.598637   16468 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:27:34.606106   16468 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:27:34.624306   16468 start.go:159] libmachine.API.Create for "kubernetes-upgrade-146000" (driver="qemu2")
	I0610 04:27:34.624340   16468 client.go:168] LocalClient.Create starting
	I0610 04:27:34.624406   16468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:27:34.624437   16468 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:34.624451   16468 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:34.624513   16468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:27:34.624537   16468 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:34.624548   16468 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:34.624972   16468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:27:34.770597   16468 main.go:141] libmachine: Creating SSH key...
	I0610 04:27:34.832645   16468 main.go:141] libmachine: Creating Disk image...
	I0610 04:27:34.832650   16468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:27:34.832820   16468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2
	I0610 04:27:34.845300   16468 main.go:141] libmachine: STDOUT: 
	I0610 04:27:34.845320   16468 main.go:141] libmachine: STDERR: 
	I0610 04:27:34.845369   16468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2 +20000M
	I0610 04:27:34.856061   16468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:27:34.856077   16468 main.go:141] libmachine: STDERR: 
	I0610 04:27:34.856095   16468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2
	I0610 04:27:34.856102   16468 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:27:34.856146   16468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:48:e9:6e:3d:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2
	I0610 04:27:34.857838   16468 main.go:141] libmachine: STDOUT: 
	I0610 04:27:34.857852   16468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:27:34.857872   16468 client.go:171] duration metric: took 233.524708ms to LocalClient.Create
	I0610 04:27:36.860093   16468 start.go:128] duration metric: took 2.261422583s to createHost
	I0610 04:27:36.860151   16468 start.go:83] releasing machines lock for "kubernetes-upgrade-146000", held for 2.261548083s
	W0610 04:27:36.860209   16468 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:36.876404   16468 out.go:177] * Deleting "kubernetes-upgrade-146000" in qemu2 ...
	W0610 04:27:36.904097   16468 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:36.904124   16468 start.go:728] Will try again in 5 seconds ...
	I0610 04:27:41.904400   16468 start.go:360] acquireMachinesLock for kubernetes-upgrade-146000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:27:41.904490   16468 start.go:364] duration metric: took 68.292µs to acquireMachinesLock for "kubernetes-upgrade-146000"
	I0610 04:27:41.904519   16468 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:27:41.904568   16468 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:27:41.912036   16468 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:27:41.927250   16468 start.go:159] libmachine.API.Create for "kubernetes-upgrade-146000" (driver="qemu2")
	I0610 04:27:41.927278   16468 client.go:168] LocalClient.Create starting
	I0610 04:27:41.927325   16468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:27:41.927351   16468 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:41.927359   16468 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:41.927396   16468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:27:41.927411   16468 main.go:141] libmachine: Decoding PEM data...
	I0610 04:27:41.927421   16468 main.go:141] libmachine: Parsing certificate...
	I0610 04:27:41.928518   16468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:27:42.119120   16468 main.go:141] libmachine: Creating SSH key...
	I0610 04:27:42.155802   16468 main.go:141] libmachine: Creating Disk image...
	I0610 04:27:42.155807   16468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:27:42.155975   16468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2
	I0610 04:27:42.168500   16468 main.go:141] libmachine: STDOUT: 
	I0610 04:27:42.168518   16468 main.go:141] libmachine: STDERR: 
	I0610 04:27:42.168566   16468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2 +20000M
	I0610 04:27:42.179325   16468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:27:42.179338   16468 main.go:141] libmachine: STDERR: 
	I0610 04:27:42.179349   16468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2
	I0610 04:27:42.179354   16468 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:27:42.179395   16468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5a:60:56:dd:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2
	I0610 04:27:42.181109   16468 main.go:141] libmachine: STDOUT: 
	I0610 04:27:42.181126   16468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:27:42.181138   16468 client.go:171] duration metric: took 253.854416ms to LocalClient.Create
	I0610 04:27:44.183291   16468 start.go:128] duration metric: took 2.278686s to createHost
	I0610 04:27:44.183349   16468 start.go:83] releasing machines lock for "kubernetes-upgrade-146000", held for 2.278831708s
	W0610 04:27:44.183569   16468 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:44.199072   16468 out.go:177] 
	W0610 04:27:44.203046   16468 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:27:44.203080   16468 out.go:239] * 
	* 
	W0610 04:27:44.205714   16468 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:27:44.218995   16468 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-146000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
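
Both create attempts above fail at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon's unix socket at /var/run/socket_vmnet, so the VM never gets a network device and minikube exits with GUEST_PROVISION. That points at the host environment on this agent rather than the build under test. A first-pass check might look like the following (a sketch; it assumes socket_vmnet was installed via Homebrew, as minikube's qemu2 driver docs suggest):

	# Is the daemon running and serving its socket?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# With a Homebrew install, the service must run as root
	sudo brew services info socket_vmnet
	sudo brew services restart socket_vmnet
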
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-146000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-146000: (3.641772834s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-146000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-146000 status --format={{.Host}}: exit status 7 (66.588166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
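
The harness tolerates the non-zero exit because minikube status encodes component health bitwise in the exit code (roughly: 1 = machine not OK, 2 = cluster not OK, 4 = Kubernetes not OK, so 7 means all three), which is why a stopped profile is expected to return 7 here. The Go template the test passes to --format can be extended to show the individual fields, for example:

	out/minikube-darwin-arm64 status -p kubernetes-upgrade-146000 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
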
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-146000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-146000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.215933083s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-146000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-146000" primary control-plane node in "kubernetes-upgrade-146000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-146000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-146000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:27:47.971030   16522 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:27:47.971158   16522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:27:47.971162   16522 out.go:304] Setting ErrFile to fd 2...
	I0610 04:27:47.971164   16522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:27:47.971282   16522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:27:47.972307   16522 out.go:298] Setting JSON to false
	I0610 04:27:47.988807   16522 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8838,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:27:47.988878   16522 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:27:47.993572   16522 out.go:177] * [kubernetes-upgrade-146000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:27:48.005501   16522 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:27:48.000563   16522 notify.go:220] Checking for updates...
	I0610 04:27:48.016401   16522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:27:48.024408   16522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:27:48.032372   16522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:27:48.040401   16522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:27:48.048447   16522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:27:48.052713   16522 config.go:182] Loaded profile config "kubernetes-upgrade-146000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0610 04:27:48.052991   16522 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:27:48.056410   16522 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:27:48.064433   16522 start.go:297] selected driver: qemu2
	I0610 04:27:48.064438   16522 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:27:48.064511   16522 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:27:48.066811   16522 cni.go:84] Creating CNI manager for ""
	I0610 04:27:48.066829   16522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:27:48.066859   16522 start.go:340] cluster config:
	{Name:kubernetes-upgrade-146000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-146000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:27:48.071236   16522 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:27:48.079399   16522 out.go:177] * Starting "kubernetes-upgrade-146000" primary control-plane node in "kubernetes-upgrade-146000" cluster
	I0610 04:27:48.083442   16522 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:27:48.083460   16522 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:27:48.083475   16522 cache.go:56] Caching tarball of preloaded images
	I0610 04:27:48.083535   16522 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:27:48.083541   16522 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:27:48.083596   16522 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/kubernetes-upgrade-146000/config.json ...
	I0610 04:27:48.083972   16522 start.go:360] acquireMachinesLock for kubernetes-upgrade-146000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:27:48.084007   16522 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "kubernetes-upgrade-146000"
	I0610 04:27:48.084016   16522 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:27:48.084022   16522 fix.go:54] fixHost starting: 
	I0610 04:27:48.084139   16522 fix.go:112] recreateIfNeeded on kubernetes-upgrade-146000: state=Stopped err=<nil>
	W0610 04:27:48.084148   16522 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:27:48.088413   16522 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-146000" ...
	I0610 04:27:48.096327   16522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5a:60:56:dd:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2
	I0610 04:27:48.098234   16522 main.go:141] libmachine: STDOUT: 
	I0610 04:27:48.098253   16522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:27:48.098283   16522 fix.go:56] duration metric: took 14.26025ms for fixHost
	I0610 04:27:48.098288   16522 start.go:83] releasing machines lock for "kubernetes-upgrade-146000", held for 14.275833ms
	W0610 04:27:48.098295   16522 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:27:48.098325   16522 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:48.098330   16522 start.go:728] Will try again in 5 seconds ...
	I0610 04:27:53.098562   16522 start.go:360] acquireMachinesLock for kubernetes-upgrade-146000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:27:53.099124   16522 start.go:364] duration metric: took 392.541µs to acquireMachinesLock for "kubernetes-upgrade-146000"
	I0610 04:27:53.099309   16522 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:27:53.099332   16522 fix.go:54] fixHost starting: 
	I0610 04:27:53.100041   16522 fix.go:112] recreateIfNeeded on kubernetes-upgrade-146000: state=Stopped err=<nil>
	W0610 04:27:53.100071   16522 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:27:53.104732   16522 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-146000" ...
	I0610 04:27:53.109581   16522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5a:60:56:dd:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubernetes-upgrade-146000/disk.qcow2
	I0610 04:27:53.119429   16522 main.go:141] libmachine: STDOUT: 
	I0610 04:27:53.119493   16522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:27:53.119601   16522 fix.go:56] duration metric: took 20.269709ms for fixHost
	I0610 04:27:53.119621   16522 start.go:83] releasing machines lock for "kubernetes-upgrade-146000", held for 20.472542ms
	W0610 04:27:53.119860   16522 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-146000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-146000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:27:53.128586   16522 out.go:177] 
	W0610 04:27:53.132857   16522 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:27:53.132889   16522 out.go:239] * 
	* 
	W0610 04:27:53.135354   16522 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:27:53.145578   16522 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-146000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-146000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-146000 version --output=json: exit status 1 (62.936917ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-146000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
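
The missing-context error (rather than a connection failure) is consistent with the starts above never getting far enough to write credentials: the kubeconfig never gained a kubernetes-upgrade-146000 entry. Listing what the test's kubeconfig actually contains would confirm this, e.g.:

	kubectl config get-contexts --kubeconfig=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
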
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-06-10 04:27:53.222638 -0700 PDT m=+716.917723668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-146000 -n kubernetes-upgrade-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-146000 -n kubernetes-upgrade-146000: exit status 7 (33.81075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-146000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-146000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-146000
--- FAIL: TestKubernetesUpgrade (18.88s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (593.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.760912882 start -p stopped-upgrade-227000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.760912882 start -p stopped-upgrade-227000 --memory=2200 --vm-driver=qemu2 : (59.011725833s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.760912882 -p stopped-upgrade-227000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.760912882 -p stopped-upgrade-227000 stop: (12.109604917s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-227000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-227000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.633750375s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-227000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-227000" primary control-plane node in "stopped-upgrade-227000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-227000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:28:55.231684   16583 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:28:55.232098   16583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:28:55.232603   16583 out.go:304] Setting ErrFile to fd 2...
	I0610 04:28:55.232612   16583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:28:55.233019   16583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:28:55.234486   16583 out.go:298] Setting JSON to false
	I0610 04:28:55.254033   16583 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8906,"bootTime":1718010029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:28:55.254104   16583 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:28:55.259514   16583 out.go:177] * [stopped-upgrade-227000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:28:55.267490   16583 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:28:55.271243   16583 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:28:55.267514   16583 notify.go:220] Checking for updates...
	I0610 04:28:55.274419   16583 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:28:55.277444   16583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:28:55.280420   16583 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:28:55.283424   16583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:28:55.286765   16583 config.go:182] Loaded profile config "stopped-upgrade-227000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 04:28:55.290436   16583 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 04:28:55.293379   16583 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:28:55.297393   16583 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:28:55.304375   16583 start.go:297] selected driver: qemu2
	I0610 04:28:55.304381   16583 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53011 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 04:28:55.304443   16583 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:28:55.306921   16583 cni.go:84] Creating CNI manager for ""
	I0610 04:28:55.306939   16583 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
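
The two cni.go lines above record minikube's CNI selection: with the dockershim gone in Kubernetes v1.24+, a "docker" runtime on the "qemu2" driver needs an explicit CNI, and "bridge" is recommended. A minimal Go sketch of that decision (chooseCNI is a hypothetical helper for illustration, not minikube's actual API):

    package main

    import "fmt"

    // chooseCNI sketches the recommendation in the log: on v1.24+ the
    // dockershim is gone, so the docker runtime needs an explicit CNI, and
    // "bridge" is the recommended default for VM drivers like qemu2.
    func chooseCNI(runtime string, k8sMinor int) string {
    	if runtime == "docker" && k8sMinor >= 24 {
    		return "bridge"
    	}
    	return "" // defer to the runtime/driver default
    }

    func main() {
    	fmt.Println(chooseCNI("docker", 24)) // bridge, as the log recommends
    }
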
	I0610 04:28:55.306973   16583 start.go:340] cluster config:
	{Name:stopped-upgrade-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53011 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 04:28:55.307024   16583 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:28:55.314352   16583 out.go:177] * Starting "stopped-upgrade-227000" primary control-plane node in "stopped-upgrade-227000" cluster
	I0610 04:28:55.318412   16583 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 04:28:55.318428   16583 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0610 04:28:55.318438   16583 cache.go:56] Caching tarball of preloaded images
	I0610 04:28:55.318518   16583 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:28:55.318523   16583 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
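
The preload lookup above is a cache-hit check: the tarball name is keyed on the preload schema version, Kubernetes version, runtime, storage driver, and architecture, and the download is skipped when the file already exists locally. A sketch under those assumptions (preloadPath is illustrative; the real layout is the cache/preloaded-tarball directory shown in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // preloadPath derives the versioned tarball path checked in the log.
    func preloadPath(miniHome, k8sVersion, runtime string) string {
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4",
    		k8sVersion, runtime)
    	return filepath.Join(miniHome, "cache", "preloaded-tarball", name)
    }

    func main() {
    	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.1", "docker")
    	if _, err := os.Stat(p); err == nil {
    		fmt.Println("found local preload, skipping download:", p)
    	} else {
    		fmt.Println("cache miss, would download:", p)
    	}
    }
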
	I0610 04:28:55.318581   16583 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/config.json ...
	I0610 04:28:55.319070   16583 start.go:360] acquireMachinesLock for stopped-upgrade-227000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:28:55.319108   16583 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "stopped-upgrade-227000"
	I0610 04:28:55.319117   16583 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:28:55.319121   16583 fix.go:54] fixHost starting: 
	I0610 04:28:55.319240   16583 fix.go:112] recreateIfNeeded on stopped-upgrade-227000: state=Stopped err=<nil>
	W0610 04:28:55.319251   16583 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:28:55.322439   16583 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-227000" ...
	I0610 04:28:55.330513   16583 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52979-:22,hostfwd=tcp::52980-:2376,hostname=stopped-upgrade-227000 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/disk.qcow2
	I0610 04:28:55.379580   16583 main.go:141] libmachine: STDOUT: 
	I0610 04:28:55.379613   16583 main.go:141] libmachine: STDERR: 
	I0610 04:28:55.379618   16583 main.go:141] libmachine: Waiting for VM to start (ssh -p 52979 docker@127.0.0.1)...
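
The restart above forwards guest ports through the user-mode NIC (hostfwd tcp::52979->22 for SSH, tcp::52980->2376 for the Docker API), so "waiting for VM to start" reduces to polling the forwarded SSH port on localhost. A minimal sketch of such a wait loop, assuming a plain TCP dial is an adequate readiness probe (a real wait would also complete an SSH handshake):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH polls the hostfwd'ed guest SSH port until it accepts a TCP
    // connection or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // port is accepting connections
    		}
    		time.Sleep(time.Second) // guest still booting; retry
    	}
    	return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
    	fmt.Println(waitForSSH("127.0.0.1:52979", 5*time.Minute))
    }
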
	I0610 04:29:15.723824   16583 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/config.json ...
	I0610 04:29:15.724245   16583 machine.go:94] provisionDockerMachine start ...
	I0610 04:29:15.724338   16583 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:15.724565   16583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b56980] 0x104b591e0 <nil>  [] 0s} localhost 52979 <nil> <nil>}
	I0610 04:29:15.724577   16583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 04:29:15.796398   16583 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 04:29:15.796416   16583 buildroot.go:166] provisioning hostname "stopped-upgrade-227000"
	I0610 04:29:15.796494   16583 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:15.796635   16583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b56980] 0x104b591e0 <nil>  [] 0s} localhost 52979 <nil> <nil>}
	I0610 04:29:15.796645   16583 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-227000 && echo "stopped-upgrade-227000" | sudo tee /etc/hostname
	I0610 04:29:15.865359   16583 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-227000
	
	I0610 04:29:15.865418   16583 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:15.865545   16583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b56980] 0x104b591e0 <nil>  [] 0s} localhost 52979 <nil> <nil>}
	I0610 04:29:15.865553   16583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-227000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-227000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-227000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 04:29:15.931881   16583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
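
The hostname script above is idempotent: it rewrites an existing 127.0.1.1 entry in /etc/hosts if one is present and appends one otherwise, so re-provisioning never duplicates the line. The same logic as a small Go sketch over the file's text (string handling only; no locking or atomic rename):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // patchHosts mirrors the shell above: replace an existing 127.0.1.1 line,
    // otherwise append one. Callers should first check that the name is not
    // already mapped, as the grep guard in the script does.
    func patchHosts(hosts, name string) string {
    	lines := strings.Split(hosts, "\n")
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name // rewrite the stale entry in place
    			return strings.Join(lines, "\n")
    		}
    	}
    	return hosts + "\n127.0.1.1 " + name // no entry yet: append one
    }

    func main() {
    	fmt.Println(patchHosts("127.0.0.1 localhost", "stopped-upgrade-227000"))
    }
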
	I0610 04:29:15.931893   16583 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19052-14289/.minikube CaCertPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19052-14289/.minikube}
	I0610 04:29:15.931904   16583 buildroot.go:174] setting up certificates
	I0610 04:29:15.931909   16583 provision.go:84] configureAuth start
	I0610 04:29:15.931913   16583 provision.go:143] copyHostCerts
	I0610 04:29:15.931997   16583 exec_runner.go:144] found /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.pem, removing ...
	I0610 04:29:15.932002   16583 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.pem
	I0610 04:29:15.932117   16583 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.pem (1082 bytes)
	I0610 04:29:15.932320   16583 exec_runner.go:144] found /Users/jenkins/minikube-integration/19052-14289/.minikube/cert.pem, removing ...
	I0610 04:29:15.932324   16583 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19052-14289/.minikube/cert.pem
	I0610 04:29:15.932366   16583 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19052-14289/.minikube/cert.pem (1123 bytes)
	I0610 04:29:15.932471   16583 exec_runner.go:144] found /Users/jenkins/minikube-integration/19052-14289/.minikube/key.pem, removing ...
	I0610 04:29:15.932474   16583 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19052-14289/.minikube/key.pem
	I0610 04:29:15.932512   16583 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19052-14289/.minikube/key.pem (1675 bytes)
	I0610 04:29:15.932595   16583 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-227000 san=[127.0.0.1 localhost minikube stopped-upgrade-227000]
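
The server cert above is issued with the SAN list from the log (127.0.0.1, localhost, minikube, stopped-upgrade-227000), so the Docker TLS endpoint verifies under any of those names. A minimal crypto/x509 sketch of building such a certificate; it self-signs for brevity, whereas minikube signs with the CA key named in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-227000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the provision.go line above.
    		DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-227000"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed here for brevity; minikube signs with ca.pem/ca-key.pem.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued %d-byte DER server cert\n", len(der))
    }
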
	I0610 04:29:16.000421   16583 provision.go:177] copyRemoteCerts
	I0610 04:29:16.000472   16583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 04:29:16.000490   16583 sshutil.go:53] new ssh client: &{IP:localhost Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/id_rsa Username:docker}
	I0610 04:29:16.039634   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 04:29:16.046844   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 04:29:16.054306   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 04:29:16.061306   16583 provision.go:87] duration metric: took 129.383458ms to configureAuth
	I0610 04:29:16.061318   16583 buildroot.go:189] setting minikube options for container-runtime
	I0610 04:29:16.061432   16583 config.go:182] Loaded profile config "stopped-upgrade-227000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 04:29:16.061470   16583 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:16.061573   16583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b56980] 0x104b591e0 <nil>  [] 0s} localhost 52979 <nil> <nil>}
	I0610 04:29:16.061578   16583 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 04:29:16.125632   16583 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 04:29:16.125643   16583 buildroot.go:70] root file system type: tmpfs
	I0610 04:29:16.125702   16583 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 04:29:16.125767   16583 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:16.125903   16583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b56980] 0x104b591e0 <nil>  [] 0s} localhost 52979 <nil> <nil>}
	I0610 04:29:16.125938   16583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 04:29:16.194076   16583 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 04:29:16.194138   16583 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:16.194257   16583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b56980] 0x104b591e0 <nil>  [] 0s} localhost 52979 <nil> <nil>}
	I0610 04:29:16.194266   16583 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 04:29:16.571675   16583 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 04:29:16.571690   16583 machine.go:97] duration metric: took 847.431917ms to provisionDockerMachine
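
The diff-then-replace command above is the idempotent-update idiom used for the unit file: a zero diff means nothing to do, while a non-zero diff (including the "can't stat" case on first boot, as seen here) triggers the move, daemon-reload, enable, and restart. A sketch of the same sequence via os/exec, assuming sudo is available non-interactively:

    package main

    import "os/exec"

    // updateDockerUnit applies the new unit only when it differs from what is
    // installed, then reloads systemd and (re)starts docker, mirroring the
    // one-liner in the log.
    func updateDockerUnit() error {
    	// Exit status 0 from diff means the files match: nothing to do.
    	if exec.Command("sudo", "diff", "-u",
    		"/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new").Run() == nil {
    		return nil
    	}
    	for _, args := range [][]string{
    		{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	} {
    		if err := exec.Command("sudo", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() { _ = updateDockerUnit() }
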
	I0610 04:29:16.571698   16583 start.go:293] postStartSetup for "stopped-upgrade-227000" (driver="qemu2")
	I0610 04:29:16.571704   16583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 04:29:16.571765   16583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 04:29:16.571775   16583 sshutil.go:53] new ssh client: &{IP:localhost Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/id_rsa Username:docker}
	I0610 04:29:16.608032   16583 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 04:29:16.609590   16583 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 04:29:16.609600   16583 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19052-14289/.minikube/addons for local assets ...
	I0610 04:29:16.609683   16583 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19052-14289/.minikube/files for local assets ...
	I0610 04:29:16.609786   16583 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/ssl/certs/147832.pem -> 147832.pem in /etc/ssl/certs
	I0610 04:29:16.609888   16583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 04:29:16.612754   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/ssl/certs/147832.pem --> /etc/ssl/certs/147832.pem (1708 bytes)
	I0610 04:29:16.619950   16583 start.go:296] duration metric: took 48.243292ms for postStartSetup
	I0610 04:29:16.619974   16583 fix.go:56] duration metric: took 21.300704459s for fixHost
	I0610 04:29:16.620036   16583 main.go:141] libmachine: Using SSH client type: native
	I0610 04:29:16.620188   16583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b56980] 0x104b591e0 <nil>  [] 0s} localhost 52979 <nil> <nil>}
	I0610 04:29:16.620195   16583 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 04:29:16.685793   16583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718018956.234016212
	
	I0610 04:29:16.685807   16583 fix.go:216] guest clock: 1718018956.234016212
	I0610 04:29:16.685811   16583 fix.go:229] Guest: 2024-06-10 04:29:16.234016212 -0700 PDT Remote: 2024-06-10 04:29:16.619977 -0700 PDT m=+21.417683751 (delta=-385.960788ms)
	I0610 04:29:16.685830   16583 fix.go:200] guest clock delta is within tolerance: -385.960788ms
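
The clock check above compares the guest's `date +%s.%N` output against the host clock and skips resynchronization because the skew (-385.96ms) is inside the allowed window. A sketch of that comparison, assuming a fixed tolerance purely for illustration:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether guest and host clocks agree to within tol.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
    	d := guest.Sub(host)
    	if d < 0 {
    		d = -d // absolute skew; sign only says which side is ahead
    	}
    	return d <= tol
    }

    func main() {
    	guest := time.Unix(1718018956, 234016212) // guest "date +%s.%N" from the log
    	host := guest.Add(386 * time.Millisecond) // host read ~386ms ahead
    	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true: skip resync
    }
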
	I0610 04:29:16.685833   16583 start.go:83] releasing machines lock for "stopped-upgrade-227000", held for 21.366571667s
	I0610 04:29:16.685924   16583 ssh_runner.go:195] Run: cat /version.json
	I0610 04:29:16.685934   16583 sshutil.go:53] new ssh client: &{IP:localhost Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/id_rsa Username:docker}
	I0610 04:29:16.685963   16583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 04:29:16.686001   16583 sshutil.go:53] new ssh client: &{IP:localhost Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/id_rsa Username:docker}
	W0610 04:29:16.686740   16583 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:53168->127.0.0.1:52979: write: broken pipe
	I0610 04:29:16.686761   16583 retry.go:31] will retry after 277.92338ms: ssh: handshake failed: write tcp 127.0.0.1:53168->127.0.0.1:52979: write: broken pipe
	W0610 04:29:16.720983   16583 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0610 04:29:16.721060   16583 ssh_runner.go:195] Run: systemctl --version
	I0610 04:29:16.722919   16583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 04:29:16.724868   16583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 04:29:16.724913   16583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0610 04:29:16.727958   16583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0610 04:29:16.733427   16583 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 04:29:16.733445   16583 start.go:494] detecting cgroup driver to use...
	I0610 04:29:16.733567   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 04:29:16.741271   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0610 04:29:16.744494   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 04:29:16.747845   16583 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 04:29:16.747892   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 04:29:16.751510   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 04:29:16.755165   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 04:29:16.758775   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 04:29:16.762248   16583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 04:29:16.765987   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 04:29:16.769169   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 04:29:16.772393   16583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
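
The run of sed commands above rewrites containerd's config.toml in place: pinning the sandbox image, forcing SystemdCgroup = false to match the cgroupfs driver, migrating runtime names to io.containerd.runc.v2, and so on. One of those edits as a Go regexp sketch operating on the raw file text:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Same pattern as the sed command above; (?m) makes ^ and $ match
    // per-line so only the SystemdCgroup line is touched.
    var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    // useCgroupfs flips SystemdCgroup to false, preserving indentation.
    func useCgroupfs(configTOML string) string {
    	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
    	fmt.Println(useCgroupfs("            SystemdCgroup = true"))
    }
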
	I0610 04:29:16.775569   16583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 04:29:16.778729   16583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 04:29:16.781241   16583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:16.849858   16583 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 04:29:16.857281   16583 start.go:494] detecting cgroup driver to use...
	I0610 04:29:16.857393   16583 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 04:29:16.863094   16583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 04:29:16.868708   16583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 04:29:16.883080   16583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 04:29:16.888291   16583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 04:29:16.893235   16583 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 04:29:16.955657   16583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 04:29:16.962003   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 04:29:16.970888   16583 ssh_runner.go:195] Run: which cri-dockerd
	I0610 04:29:16.972559   16583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 04:29:16.975644   16583 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 04:29:16.980419   16583 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 04:29:17.043685   16583 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 04:29:17.109125   16583 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 04:29:17.109197   16583 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 04:29:17.114481   16583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:17.175213   16583 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 04:29:18.313859   16583 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.138619375s)
	I0610 04:29:18.313921   16583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 04:29:18.318755   16583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 04:29:18.323488   16583 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 04:29:18.403399   16583 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 04:29:18.465839   16583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:18.534709   16583 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 04:29:18.542027   16583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 04:29:18.547187   16583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:18.615094   16583 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 04:29:18.655140   16583 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 04:29:18.655230   16583 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 04:29:18.657394   16583 start.go:562] Will wait 60s for crictl version
	I0610 04:29:18.657444   16583 ssh_runner.go:195] Run: which crictl
	I0610 04:29:18.658774   16583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 04:29:18.674662   16583 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0610 04:29:18.674729   16583 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 04:29:18.694389   16583 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 04:29:18.715222   16583 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0610 04:29:18.715287   16583 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0610 04:29:18.716714   16583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 04:29:18.720348   16583 kubeadm.go:877] updating cluster {Name:stopped-upgrade-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53011 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0610 04:29:18.720419   16583 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 04:29:18.720460   16583 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 04:29:18.731604   16583 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 04:29:18.731612   16583 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
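
The "wasn't preloaded" verdict above follows from a plain name comparison: the tarball ships k8s.gcr.io/* tags while the expected list uses the newer registry.k8s.io/* names, so none of the required images match and minikube falls back to loading them one by one from its on-disk cache. A sketch of that membership test:

    package main

    import "fmt"

    // preloaded reports whether want appears verbatim in the runtime's image
    // list: an exact repo:tag match, with no k8s.gcr.io/registry.k8s.io aliasing.
    func preloaded(have []string, want string) bool {
    	for _, img := range have {
    		if img == want {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	have := []string{"k8s.gcr.io/kube-apiserver:v1.24.1"} // from `docker images` above
    	fmt.Println(preloaded(have, "registry.k8s.io/kube-apiserver:v1.24.1")) // false
    }
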
	I0610 04:29:18.731659   16583 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 04:29:18.735388   16583 ssh_runner.go:195] Run: which lz4
	I0610 04:29:18.736755   16583 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 04:29:18.737995   16583 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 04:29:18.738005   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0610 04:29:19.473131   16583 docker.go:649] duration metric: took 736.399334ms to copy over tarball
	I0610 04:29:19.473194   16583 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 04:29:20.659473   16583 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.186257625s)
	I0610 04:29:20.659487   16583 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 04:29:20.675146   16583 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 04:29:20.678421   16583 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0610 04:29:20.683302   16583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:20.763905   16583 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 04:29:22.621955   16583 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.85802125s)
	I0610 04:29:22.622045   16583 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 04:29:22.635755   16583 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 04:29:22.635764   16583 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0610 04:29:22.635769   16583 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 04:29:22.642898   16583 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:22.642939   16583 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:22.642992   16583 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:22.643003   16583 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:22.643043   16583 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:22.643059   16583 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:22.643089   16583 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:22.643176   16583 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 04:29:22.650732   16583 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:22.651650   16583 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 04:29:22.651775   16583 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:22.651775   16583 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:22.651814   16583 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:22.651852   16583 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:22.651898   16583 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:22.651925   16583 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:23.512180   16583 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:23.523233   16583 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0610 04:29:23.523260   16583 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:23.523313   16583 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0610 04:29:23.533088   16583 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
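
Each "needs transfer" line above is the result of comparing the image ID reported by the container runtime with the hash recorded for the cached image; a mismatch (or a missing image) removes the stale tag and reloads from cache. A sketch of that check, assuming the `docker image inspect --format {{.Id}}` output is comparable to the recorded hash after stripping the digest prefix:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer mirrors the check in the log: true when the image is
    // absent or its ID does not match the hash recorded for the cached copy.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // image not present at all
    	}
    	got := strings.TrimSpace(string(out))
    	got = strings.TrimPrefix(got, "sha256:") // normalization assumed here
    	return got != wantID // present but wrong content
    }

    func main() {
    	fmt.Println(needsTransfer("registry.k8s.io/pause:3.7",
    		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"))
    }
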
	W0610 04:29:23.548066   16583 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 04:29:23.548198   16583 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:23.558059   16583 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0610 04:29:23.558086   16583 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:23.558139   16583 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0610 04:29:23.567966   16583 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 04:29:23.568074   16583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0610 04:29:23.570567   16583 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0610 04:29:23.570585   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0610 04:29:23.582184   16583 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0610 04:29:23.585008   16583 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:23.603571   16583 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0610 04:29:23.603595   16583 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0610 04:29:23.603660   16583 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0610 04:29:23.619113   16583 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0610 04:29:23.619159   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0610 04:29:23.620359   16583 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0610 04:29:23.620378   16583 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:23.620439   16583 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0610 04:29:23.641471   16583 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 04:29:23.641599   16583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0610 04:29:23.673464   16583 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 04:29:23.673577   16583 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:23.684324   16583 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0610 04:29:23.684403   16583 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0610 04:29:23.684425   16583 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0610 04:29:23.684444   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0610 04:29:23.688630   16583 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0610 04:29:23.688655   16583 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:23.688710   16583 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:29:23.689376   16583 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:23.695648   16583 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:23.704072   16583 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:23.707277   16583 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 04:29:23.707406   16583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0610 04:29:23.710056   16583 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0610 04:29:23.710065   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0610 04:29:23.711299   16583 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0610 04:29:23.711318   16583 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:23.711371   16583 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 04:29:23.722640   16583 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0610 04:29:23.722660   16583 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:23.722716   16583 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0610 04:29:23.728232   16583 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0610 04:29:23.728265   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0610 04:29:23.728499   16583 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0610 04:29:23.728519   16583 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:23.728567   16583 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0610 04:29:23.772546   16583 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0610 04:29:23.772614   16583 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0610 04:29:23.772660   16583 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0610 04:29:23.772712   16583 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0610 04:29:23.772769   16583 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0610 04:29:23.779668   16583 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0610 04:29:23.779695   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0610 04:29:23.790690   16583 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0610 04:29:23.790705   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0610 04:29:24.216169   16583 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0610 04:29:24.216190   16583 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0610 04:29:24.216196   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0610 04:29:24.375812   16583 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0610 04:29:24.375850   16583 cache_images.go:92] duration metric: took 1.740061333s to LoadCachedImages
	W0610 04:29:24.375892   16583 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0610 04:29:24.375898   16583 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0610 04:29:24.375965   16583 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-227000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 04:29:24.376028   16583 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 04:29:24.389431   16583 cni.go:84] Creating CNI manager for ""
	I0610 04:29:24.389441   16583 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:29:24.389446   16583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 04:29:24.389453   16583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-227000 NodeName:stopped-upgrade-227000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 04:29:24.389519   16583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-227000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 04:29:24.389575   16583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0610 04:29:24.392298   16583 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 04:29:24.392334   16583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 04:29:24.395275   16583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0610 04:29:24.400655   16583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 04:29:24.405347   16583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0610 04:29:24.411502   16583 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0610 04:29:24.412782   16583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 04:29:24.416162   16583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:29:24.477609   16583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 04:29:24.486307   16583 certs.go:68] Setting up /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000 for IP: 10.0.2.15
	I0610 04:29:24.486319   16583 certs.go:194] generating shared ca certs ...
	I0610 04:29:24.486327   16583 certs.go:226] acquiring lock for ca certs: {Name:mk478b348d446dde3a95549bafcb3e70b2a1a766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:29:24.486577   16583 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.key
	I0610 04:29:24.486616   16583 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/proxy-client-ca.key
	I0610 04:29:24.486621   16583 certs.go:256] generating profile certs ...
	I0610 04:29:24.486720   16583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/client.key
	I0610 04:29:24.486735   16583 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.key.dc69dbc1
	I0610 04:29:24.486748   16583 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.crt.dc69dbc1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0610 04:29:24.611206   16583 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.crt.dc69dbc1 ...
	I0610 04:29:24.611220   16583 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.crt.dc69dbc1: {Name:mk5402e76dba99b2e6928c9bcda754433504401d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:29:24.611534   16583 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.key.dc69dbc1 ...
	I0610 04:29:24.611540   16583 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.key.dc69dbc1: {Name:mk7cecd5017b8da4b36c026bac1bbaaa008edd0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:29:24.611681   16583 certs.go:381] copying /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.crt.dc69dbc1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.crt
	I0610 04:29:24.611814   16583 certs.go:385] copying /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.key.dc69dbc1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.key
	I0610 04:29:24.611971   16583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/proxy-client.key
	I0610 04:29:24.612096   16583 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/14783.pem (1338 bytes)
	W0610 04:29:24.612120   16583 certs.go:480] ignoring /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/14783_empty.pem, impossibly tiny 0 bytes
	I0610 04:29:24.612125   16583 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 04:29:24.612146   16583 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem (1082 bytes)
	I0610 04:29:24.612164   16583 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem (1123 bytes)
	I0610 04:29:24.612181   16583 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/key.pem (1675 bytes)
	I0610 04:29:24.612218   16583 certs.go:484] found cert: /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/ssl/certs/147832.pem (1708 bytes)
	I0610 04:29:24.612521   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 04:29:24.619737   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 04:29:24.627526   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 04:29:24.635494   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 04:29:24.643733   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 04:29:24.651562   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 04:29:24.659168   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 04:29:24.666833   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 04:29:24.674540   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/ssl/certs/147832.pem --> /usr/share/ca-certificates/147832.pem (1708 bytes)
	I0610 04:29:24.682247   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 04:29:24.689265   16583 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/14783.pem --> /usr/share/ca-certificates/14783.pem (1338 bytes)
	I0610 04:29:24.696771   16583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 04:29:24.702572   16583 ssh_runner.go:195] Run: openssl version
	I0610 04:29:24.704660   16583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147832.pem && ln -fs /usr/share/ca-certificates/147832.pem /etc/ssl/certs/147832.pem"
	I0610 04:29:24.708371   16583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147832.pem
	I0610 04:29:24.709963   16583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 11:16 /usr/share/ca-certificates/147832.pem
	I0610 04:29:24.709991   16583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147832.pem
	I0610 04:29:24.711881   16583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147832.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 04:29:24.715449   16583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 04:29:24.719140   16583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 04:29:24.720724   16583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0610 04:29:24.720753   16583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 04:29:24.722720   16583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 04:29:24.726262   16583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14783.pem && ln -fs /usr/share/ca-certificates/14783.pem /etc/ssl/certs/14783.pem"
	I0610 04:29:24.729122   16583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14783.pem
	I0610 04:29:24.730464   16583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 11:16 /usr/share/ca-certificates/14783.pem
	I0610 04:29:24.730485   16583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14783.pem
	I0610 04:29:24.732227   16583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14783.pem /etc/ssl/certs/51391683.0"
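(Note: each certificate above goes through the same three steps: link it into /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink /etc/ssl/certs/&lt;hash&gt;.0 at it so OpenSSL's hashed CA-directory lookup can find it — that is where 3ec20f2e.0, b5213941.0, and 51391683.0 come from. A minimal Go sketch of the hash step; the helper name is an assumption.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// opensslHash (hypothetical helper) returns the subject hash OpenSSL uses
// for CA directory lookups, i.e. the value behind the <hash>.0 symlinks.
func opensslHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := opensslHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	// Final step from the log: link <hash>.0 at the cert so OpenSSL's
	// default verify path resolves it by hash.
	fmt.Printf("sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
```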
	I0610 04:29:24.735141   16583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 04:29:24.736960   16583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 04:29:24.739488   16583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 04:29:24.742080   16583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 04:29:24.748677   16583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 04:29:24.751236   16583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 04:29:24.753867   16583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
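(Note: each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours — 86400 seconds — and a non-zero exit would force regeneration before the cluster restart. A hedged Go sketch of that check; the function name is an assumption.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// certWillExpireWithin (hypothetical helper) mirrors the log's
// "openssl x509 -noout -checkend 86400" probes: openssl exits non-zero
// when the cert will expire inside the window.
func certWillExpireWithin(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds))
	err := cmd.Run()
	if err == nil {
		return false, nil // valid beyond the window
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // non-zero exit: expires within the window
	}
	return false, err // openssl missing, unreadable file, etc.
}

func main() {
	expiring, err := certWillExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	fmt.Println(expiring, err)
}
```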
	I0610 04:29:24.756497   16583 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53011 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 04:29:24.756588   16583 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 04:29:24.769460   16583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 04:29:24.773669   16583 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 04:29:24.773679   16583 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 04:29:24.773682   16583 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 04:29:24.773736   16583 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 04:29:24.781185   16583 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 04:29:24.781244   16583 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-227000" does not appear in /Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:29:24.781262   16583 kubeconfig.go:62] /Users/jenkins/minikube-integration/19052-14289/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-227000" cluster setting kubeconfig missing "stopped-upgrade-227000" context setting]
	I0610 04:29:24.781427   16583 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/kubeconfig: {Name:mke1ab156d45cd5cbace7e8cb5713141e8116718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:29:24.782065   16583 kapi.go:59] client config for stopped-upgrade-227000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/client.key", CAFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ee4460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 04:29:24.782892   16583 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 04:29:24.786681   16583 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-227000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
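(Note: drift detection here is a plain `diff -u` between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new; diff's exit status 1 marks drift — in this run, the criSocket scheme and the cgroupDriver changed — which routes the restart into full reconfiguration. A minimal Go sketch of that check; the function name is an assumption.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeadmDrifted (hypothetical helper) mirrors the drift check in the log.
// diff exits 0 when the files are identical, 1 when they differ, >1 on error.
func kubeadmDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: no reconfiguration needed
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // differ: reconfigure from newPath
	}
	return false, "", err
}

func main() {
	drifted, diff, err := kubeadmDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}
```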
	I0610 04:29:24.786689   16583 kubeadm.go:1154] stopping kube-system containers ...
	I0610 04:29:24.786759   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 04:29:24.799588   16583 docker.go:483] Stopping containers: [0e7a95d293ba 1fe2347ffc09 882aa3940d87 ce4679d6a4bd b5467787bd30 1dbd81d082f2 1e6eecde536a fe06f8cb934b]
	I0610 04:29:24.799671   16583 ssh_runner.go:195] Run: docker stop 0e7a95d293ba 1fe2347ffc09 882aa3940d87 ce4679d6a4bd b5467787bd30 1dbd81d082f2 1e6eecde536a fe06f8cb934b
	I0610 04:29:24.811019   16583 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 04:29:24.817114   16583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 04:29:24.820447   16583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 04:29:24.820457   16583 kubeadm.go:156] found existing configuration files:
	
	I0610 04:29:24.820513   16583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/admin.conf
	I0610 04:29:24.823934   16583 kubeadm.go:162] "https://control-plane.minikube.internal:53011" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 04:29:24.823984   16583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 04:29:24.827219   16583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/kubelet.conf
	I0610 04:29:24.829944   16583 kubeadm.go:162] "https://control-plane.minikube.internal:53011" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 04:29:24.829974   16583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 04:29:24.832417   16583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/controller-manager.conf
	I0610 04:29:24.835028   16583 kubeadm.go:162] "https://control-plane.minikube.internal:53011" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 04:29:24.835050   16583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 04:29:24.837552   16583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/scheduler.conf
	I0610 04:29:24.840101   16583 kubeadm.go:162] "https://control-plane.minikube.internal:53011" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 04:29:24.840149   16583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 04:29:24.843501   16583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 04:29:24.847195   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 04:29:24.873206   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 04:29:25.310158   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 04:29:25.416990   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 04:29:25.440680   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
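(Note: rather than a full `kubeadm init`, the restart path above replays the individual init phases — certs, kubeconfig, kubelet-start, control-plane, etcd — against the repaired config. A sketch that builds those same commands, with the phase list copied from the log; it prints rather than executes them, since running them requires root on the node.)

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases replayed by the restart path, in the order the log shows.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local"}
	for _, p := range phases {
		script := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		fmt.Println(exec.Command("/bin/bash", "-c", script).String())
	}
}
```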
	I0610 04:29:25.466932   16583 api_server.go:52] waiting for apiserver process to appear ...
	I0610 04:29:25.467008   16583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:25.969123   16583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:26.469089   16583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:29:26.473310   16583 api_server.go:72] duration metric: took 1.00637075s to wait for apiserver process to appear ...
	I0610 04:29:26.473319   16583 api_server.go:88] waiting for apiserver healthz status ...
	I0610 04:29:26.473328   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:29:31.475579   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:29:31.475656   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:29:36.476428   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:29:36.476461   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:29:41.476979   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:29:41.477000   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:29:46.477579   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:29:46.477603   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:29:51.478691   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:29:51.478713   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:29:56.479767   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:29:56.479806   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:01.481190   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:01.481229   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:06.483144   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:06.483184   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:11.484679   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:11.484750   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:16.487292   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:16.487321   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:21.489664   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:21.489738   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:26.492321   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
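(Note: from this point the run settles into its failure loop: every probe of https://10.0.2.15:8443/healthz times out after roughly five seconds with "context deadline exceeded", after which a diagnostics pass is taken and the probe retried. A self-contained Go sketch of one such probe; the 5s timeout matches the observed spacing, and InsecureSkipVerify is an assumption to keep the sketch standalone — the real client pins the cluster CA.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs one probe like the api_server.go lines above:
// GET /healthz with a short per-request timeout.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // timeouts surface as "context deadline exceeded"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for i := 0; i < 3; i++ { // the real loop retries until an overall deadline
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("not ready:", err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```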
	I0610 04:30:26.492599   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:30:26.509801   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:30:26.509905   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:30:26.523420   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:30:26.523494   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:30:26.535042   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:30:26.535116   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:30:26.545256   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:30:26.545327   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:30:26.555529   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:30:26.555619   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:30:26.566421   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:30:26.566490   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:30:26.578667   16583 logs.go:276] 0 containers: []
	W0610 04:30:26.578678   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:30:26.578740   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:30:26.588961   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:30:26.588979   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:30:26.588985   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:30:26.604680   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:30:26.604690   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:30:26.641951   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:30:26.641959   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:30:26.653724   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:30:26.653751   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:30:26.673820   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:30:26.673831   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:30:26.690658   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:30:26.690668   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:30:26.714628   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:30:26.714640   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:30:26.726608   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:30:26.726622   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:30:26.738629   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:30:26.738641   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:30:26.742830   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:30:26.742837   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:30:26.756634   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:30:26.756645   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:30:26.767942   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:30:26.767956   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:30:26.779529   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:30:26.779540   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:30:26.791408   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:30:26.791419   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:30:26.911818   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:30:26.911832   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:30:26.925308   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:30:26.925320   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:30:26.955529   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:30:26.955540   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:30:29.469851   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:34.472335   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:34.472564   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:30:34.494153   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:30:34.494258   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:30:34.506948   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:30:34.507021   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:30:34.519075   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:30:34.519141   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:30:34.533823   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:30:34.533902   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:30:34.543968   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:30:34.544036   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:30:34.554137   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:30:34.554209   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:30:34.564792   16583 logs.go:276] 0 containers: []
	W0610 04:30:34.564804   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:30:34.564858   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:30:34.575225   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:30:34.575251   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:30:34.575258   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:30:34.612386   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:30:34.612398   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:30:34.626652   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:30:34.626662   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:30:34.642056   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:30:34.642067   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:30:34.654615   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:30:34.654646   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:30:34.679441   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:30:34.679448   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:30:34.683209   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:30:34.683215   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:30:34.697495   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:30:34.697505   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:30:34.726511   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:30:34.726530   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:30:34.738146   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:30:34.738158   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:30:34.749756   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:30:34.749768   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:30:34.785975   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:30:34.785985   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:30:34.797500   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:30:34.797512   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:30:34.811816   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:30:34.811837   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:30:34.828447   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:30:34.828458   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:30:34.840522   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:30:34.840535   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:30:34.852906   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:30:34.852917   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:30:37.366803   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:42.369439   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:42.369813   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:30:42.403348   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:30:42.403504   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:30:42.423466   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:30:42.423596   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:30:42.438034   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:30:42.438126   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:30:42.450871   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:30:42.450946   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:30:42.462645   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:30:42.462723   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:30:42.473508   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:30:42.473575   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:30:42.484187   16583 logs.go:276] 0 containers: []
	W0610 04:30:42.484199   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:30:42.484261   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:30:42.495535   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:30:42.495563   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:30:42.495570   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:30:42.509971   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:30:42.509983   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:30:42.521680   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:30:42.521691   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:30:42.533154   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:30:42.533163   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:30:42.558140   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:30:42.558148   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:30:42.583527   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:30:42.583537   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:30:42.601597   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:30:42.601610   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:30:42.606261   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:30:42.606268   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:30:42.642181   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:30:42.642191   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:30:42.656518   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:30:42.656529   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:30:42.667616   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:30:42.667627   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:30:42.682297   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:30:42.682306   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:30:42.694420   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:30:42.694430   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:30:42.733203   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:30:42.733216   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:30:42.751504   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:30:42.751517   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:30:42.763201   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:30:42.763212   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:30:42.774271   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:30:42.774284   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:30:45.288480   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:50.290969   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:50.291118   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:30:50.306076   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:30:50.306153   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:30:50.318675   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:30:50.318737   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:30:50.329386   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:30:50.329455   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:30:50.339571   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:30:50.339642   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:30:50.353800   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:30:50.353860   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:30:50.364416   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:30:50.364485   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:30:50.375946   16583 logs.go:276] 0 containers: []
	W0610 04:30:50.375958   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:30:50.376014   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:30:50.386411   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:30:50.386435   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:30:50.386454   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:30:50.423203   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:30:50.423214   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:30:50.446354   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:30:50.446364   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:30:50.459560   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:30:50.459574   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:30:50.463900   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:30:50.463906   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:30:50.499734   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:30:50.499747   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:30:50.511116   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:30:50.511128   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:30:50.523686   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:30:50.523697   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:30:50.536981   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:30:50.536994   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:30:50.552240   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:30:50.552251   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:30:50.569443   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:30:50.569457   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:30:50.584672   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:30:50.584684   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:30:50.596620   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:30:50.596631   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:30:50.614324   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:30:50.614335   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:30:50.627930   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:30:50.627942   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:30:50.653454   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:30:50.653464   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:30:50.664998   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:30:50.665010   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:30:53.178355   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:30:58.181091   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:30:58.181552   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:30:58.221517   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:30:58.221665   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:30:58.243295   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:30:58.243421   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:30:58.258648   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:30:58.258721   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:30:58.271526   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:30:58.271606   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:30:58.282624   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:30:58.282700   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:30:58.301978   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:30:58.302053   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:30:58.312824   16583 logs.go:276] 0 containers: []
	W0610 04:30:58.312835   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:30:58.312896   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:30:58.323576   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:30:58.323594   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:30:58.323600   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:30:58.338248   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:30:58.338261   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:30:58.350544   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:30:58.350559   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:30:58.386420   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:30:58.386435   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:30:58.400319   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:30:58.400333   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:30:58.412660   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:30:58.412671   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:30:58.429644   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:30:58.429656   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:30:58.441059   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:30:58.441072   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:30:58.479687   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:30:58.479708   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:30:58.483840   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:30:58.483848   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:30:58.509715   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:30:58.509724   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:30:58.523901   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:30:58.523911   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:30:58.536074   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:30:58.536085   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:30:58.547764   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:30:58.547776   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:30:58.571234   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:30:58.571242   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:30:58.590579   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:30:58.590590   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:30:58.602805   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:30:58.602818   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:31:01.117248   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:06.119655   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:06.119829   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:06.140079   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:31:06.140183   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:06.154549   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:31:06.154636   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:06.166565   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:31:06.166636   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:06.177633   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:31:06.177711   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:06.187911   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:31:06.187977   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:06.198256   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:31:06.198324   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:06.208700   16583 logs.go:276] 0 containers: []
	W0610 04:31:06.208712   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:06.208768   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:06.218747   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:31:06.218768   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:06.218774   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:06.242018   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:31:06.242025   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:06.253425   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:06.253435   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:06.257442   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:31:06.257451   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:31:06.271293   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:31:06.271303   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:31:06.282378   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:31:06.282388   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:31:06.299414   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:31:06.299426   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:31:06.311523   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:06.311534   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:06.351542   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:31:06.351553   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:31:06.367885   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:31:06.367895   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:31:06.382932   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:31:06.382942   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:31:06.408018   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:31:06.408029   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:31:06.420484   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:06.420495   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:06.457291   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:31:06.457306   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:31:06.472137   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:31:06.472150   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:31:06.483996   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:31:06.484008   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:31:06.495933   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:31:06.495944   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:31:09.008496   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:14.010878   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:14.011151   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:14.035549   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:31:14.035673   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:14.059886   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:31:14.059955   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:14.071498   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:31:14.071559   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:14.084686   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:31:14.084751   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:14.095225   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:31:14.095298   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:14.105538   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:31:14.105603   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:14.116234   16583 logs.go:276] 0 containers: []
	W0610 04:31:14.116245   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:14.116299   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:14.126392   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:31:14.126412   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:31:14.126419   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:31:14.151326   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:31:14.151336   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:31:14.174430   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:14.174439   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:14.198958   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:14.198967   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:14.202862   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:14.202871   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:14.238067   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:31:14.238081   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:31:14.252133   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:31:14.252144   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:31:14.267250   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:31:14.267262   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:31:14.283414   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:31:14.283425   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:31:14.295264   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:31:14.295278   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:31:14.306957   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:14.306970   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:14.343641   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:31:14.343651   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:31:14.361889   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:31:14.361899   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:31:14.379469   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:31:14.379497   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:31:14.390469   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:31:14.390481   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:14.402238   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:31:14.402250   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:31:14.417888   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:31:14.417900   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
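
The entries above repeat a single diagnostic cycle: probe the apiserver's /healthz endpoint with a roughly 5-second client timeout, and on each failure enumerate the control-plane containers by name filter and tail the last 400 lines of each before the next probe. Below is a minimal Go sketch of that cycle, assuming only what the log itself shows (the endpoint URL, the docker name filters, the --tail 400 depth, the ~5s timeout); the gatherLogs helper, the 3-attempt bound, and the TLS handling are illustrative, not minikube's actual source.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// gatherLogs lists containers whose name matches the filter, then tails the
// last 400 lines of each, mirroring the "docker ps -a --filter" /
// "docker logs --tail 400" pairs recorded in the log above.
func gatherLogs(nameFilter string) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name="+nameFilter, "--format={{.ID}}").Output()
	if err != nil {
		return
	}
	for _, id := range strings.Fields(string(out)) {
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s ===\n%s\n", id, logs)
	}
}

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s wait before each "context deadline exceeded"
		Transport: &http.Transport{
			// assumption: the in-VM apiserver certificate is not trusted by the probing host
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 0; attempt < 3; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Printf("healthz probe failed: %v\n", err)
		for _, f := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
			"k8s_kube-scheduler", "k8s_kube-proxy", "k8s_kube-controller-manager",
			"k8s_storage-provisioner"} {
			gatherLogs(f)
		}
		time.Sleep(3 * time.Second) // the log shows the next probe starting ~8s after the previous one
	}
}
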
	I0610 04:31:16.931227   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:21.933632   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:21.933833   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:21.953688   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:31:21.953782   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:21.968341   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:31:21.968420   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:21.980015   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:31:21.980087   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:21.999616   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:31:21.999693   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:22.010093   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:31:22.010165   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:22.024982   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:31:22.025057   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:22.035182   16583 logs.go:276] 0 containers: []
	W0610 04:31:22.035194   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:22.035252   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:22.045384   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:31:22.045402   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:31:22.045408   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:31:22.059880   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:31:22.059890   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:31:22.077660   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:31:22.077672   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:31:22.090290   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:31:22.090302   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:31:22.104941   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:31:22.104956   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:22.117065   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:31:22.117076   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:31:22.131144   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:31:22.131155   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:31:22.145373   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:31:22.145383   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:31:22.156941   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:31:22.156952   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:31:22.184771   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:31:22.184781   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:31:22.198483   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:31:22.198494   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:31:22.210165   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:31:22.210175   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:31:22.225803   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:22.225815   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:22.250663   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:22.250670   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:22.289122   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:22.289137   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:22.294121   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:22.294141   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:22.331667   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:31:22.331677   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:31:24.843798   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:29.846165   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:29.846374   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:29.864104   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:31:29.864195   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:29.877234   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:31:29.877307   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:29.888267   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:31:29.888346   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:29.902451   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:31:29.902521   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:29.913207   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:31:29.913284   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:29.923585   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:31:29.923646   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:29.933887   16583 logs.go:276] 0 containers: []
	W0610 04:31:29.933902   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:29.933955   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:29.944030   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:31:29.944047   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:31:29.944052   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:31:29.959404   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:31:29.959415   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:31:29.973623   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:31:29.973635   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:31:29.985287   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:31:29.985298   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:29.997257   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:29.997266   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:30.034707   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:31:30.034714   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:31:30.059232   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:31:30.059242   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:31:30.071167   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:31:30.071177   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:31:30.084534   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:30.084544   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:30.089004   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:31:30.089010   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:31:30.103718   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:31:30.103731   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:31:30.121029   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:31:30.121040   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:31:30.134828   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:31:30.134838   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:31:30.148577   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:31:30.148590   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:31:30.164386   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:30.164397   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:30.195783   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:30.195793   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:30.230437   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:31:30.230450   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:31:32.744227   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:37.745351   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:37.745518   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:37.763926   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:31:37.764020   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:37.777394   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:31:37.777474   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:37.792604   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:31:37.792682   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:37.802762   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:31:37.802835   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:37.816571   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:31:37.816642   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:37.827307   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:31:37.827376   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:37.837484   16583 logs.go:276] 0 containers: []
	W0610 04:31:37.837494   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:37.837546   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:37.847938   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:31:37.847960   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:31:37.847966   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:31:37.859306   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:31:37.859318   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:31:37.873207   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:31:37.873222   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:31:37.887653   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:31:37.887664   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:31:37.898812   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:37.898823   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:37.923052   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:37.923059   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:37.927351   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:31:37.927357   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:31:37.951930   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:31:37.951940   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:31:37.963659   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:31:37.963669   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:31:37.981332   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:31:37.981343   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:37.993394   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:31:37.993404   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:31:38.007254   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:31:38.007264   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:31:38.019882   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:31:38.019894   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:31:38.033944   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:31:38.033955   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:31:38.045300   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:31:38.045311   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:31:38.057002   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:38.057013   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:38.094080   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:38.094091   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:40.631222   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:45.633599   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:45.633821   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:45.653113   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:31:45.653207   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:45.671406   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:31:45.671481   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:45.688177   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:31:45.688254   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:45.698544   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:31:45.698603   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:45.708854   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:31:45.708926   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:45.721028   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:31:45.721100   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:45.732211   16583 logs.go:276] 0 containers: []
	W0610 04:31:45.732224   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:45.732279   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:45.745765   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:31:45.745783   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:31:45.745792   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:31:45.760167   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:45.760176   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:45.782913   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:31:45.782922   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:31:45.795152   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:31:45.795164   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:45.806774   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:45.806786   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:45.811345   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:45.811355   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:45.855791   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:31:45.855801   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:31:45.880208   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:31:45.880222   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:31:45.891625   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:31:45.891636   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:31:45.905092   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:31:45.905101   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:31:45.919331   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:31:45.919340   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:31:45.931035   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:31:45.931044   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:31:45.943240   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:31:45.943251   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:31:45.954280   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:45.954290   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:45.992805   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:31:45.992815   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:31:46.006646   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:31:46.006656   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:31:46.018500   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:31:46.018511   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:31:48.538423   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:31:53.540961   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:31:53.541233   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:31:53.562986   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:31:53.563088   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:31:53.579346   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:31:53.579431   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:31:53.591731   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:31:53.591801   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:31:53.603566   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:31:53.603642   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:31:53.618221   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:31:53.618290   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:31:53.628656   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:31:53.628728   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:31:53.638762   16583 logs.go:276] 0 containers: []
	W0610 04:31:53.638776   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:31:53.638837   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:31:53.649464   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:31:53.649480   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:31:53.649486   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:31:53.686768   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:31:53.686777   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:31:53.725111   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:31:53.725127   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:31:53.752098   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:31:53.752116   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:31:53.770934   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:31:53.770949   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:31:53.783448   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:31:53.783462   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:31:53.798122   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:31:53.798137   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:31:53.811890   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:31:53.811906   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:31:53.826866   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:31:53.826879   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:31:53.838335   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:31:53.838349   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:31:53.850203   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:31:53.850217   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:31:53.854849   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:31:53.854858   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:31:53.869279   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:31:53.869289   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:31:53.880464   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:31:53.880476   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:31:53.891504   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:31:53.891514   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:31:53.902725   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:31:53.902740   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:31:53.915207   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:31:53.915217   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:31:56.440084   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:01.442829   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:01.443212   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:01.491395   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:32:01.491537   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:01.510687   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:32:01.510776   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:01.524597   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:32:01.524675   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:01.539890   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:32:01.539986   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:01.550999   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:32:01.551066   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:01.561846   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:32:01.561919   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:01.575434   16583 logs.go:276] 0 containers: []
	W0610 04:32:01.575443   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:01.575496   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:01.585814   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:32:01.585831   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:32:01.585836   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:32:01.604424   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:32:01.604435   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:32:01.616035   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:32:01.616047   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:32:01.627673   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:32:01.627683   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:01.639767   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:32:01.639779   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:32:01.653718   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:32:01.653728   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:32:01.681706   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:32:01.681716   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:32:01.695427   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:01.695437   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:01.732596   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:01.732606   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:01.736420   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:32:01.736429   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:32:01.751117   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:32:01.751126   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:32:01.767177   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:01.767202   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:01.790760   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:32:01.790775   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:32:01.807403   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:32:01.807413   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:32:01.819281   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:32:01.819294   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:32:01.837714   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:01.837725   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:01.875755   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:32:01.875767   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:32:04.389774   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:09.392679   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:09.393112   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:09.431695   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:32:09.431848   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:09.453643   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:32:09.453769   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:09.470834   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:32:09.470914   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:09.483111   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:32:09.483178   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:09.495619   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:32:09.495696   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:09.506787   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:32:09.506855   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:09.517196   16583 logs.go:276] 0 containers: []
	W0610 04:32:09.517209   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:09.517278   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:09.533177   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:32:09.533198   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:32:09.533203   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:32:09.558280   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:32:09.558290   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:32:09.573380   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:32:09.573390   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:32:09.584862   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:32:09.584873   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:32:09.597026   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:32:09.597039   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:32:09.608767   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:32:09.608778   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:09.620317   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:09.620330   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:09.655350   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:32:09.655359   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:32:09.670041   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:32:09.670054   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:32:09.681567   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:32:09.681579   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:32:09.699148   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:09.699162   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:09.724134   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:09.724142   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:09.762667   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:09.762675   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:09.766546   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:32:09.766555   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:32:09.780673   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:32:09.780684   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:32:09.794845   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:32:09.794855   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:32:09.809923   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:32:09.809934   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:32:12.324503   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:17.327021   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:17.327270   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:17.356825   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:32:17.356944   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:17.373295   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:32:17.373378   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:17.386000   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:32:17.386070   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:17.397352   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:32:17.397419   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:17.408822   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:32:17.408893   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:17.419143   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:32:17.419194   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:17.429263   16583 logs.go:276] 0 containers: []
	W0610 04:32:17.429274   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:17.429329   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:17.439844   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:32:17.439865   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:32:17.439870   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:32:17.453621   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:32:17.453631   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:32:17.471498   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:17.471510   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:17.495977   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:32:17.495996   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:32:17.507639   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:17.507651   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:17.543650   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:32:17.543664   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:32:17.568978   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:32:17.568990   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:32:17.579811   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:32:17.579823   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:32:17.594817   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:32:17.594828   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:32:17.607160   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:17.607171   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:17.645449   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:17.645457   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:17.649428   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:32:17.649434   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:32:17.672094   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:32:17.672104   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:32:17.686565   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:32:17.686578   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:17.698755   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:32:17.698764   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:32:17.718096   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:32:17.718108   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:32:17.729807   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:32:17.729817   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:32:20.240971   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:25.242827   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:25.243049   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:25.274379   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:32:25.274501   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:25.288799   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:32:25.288886   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:25.300613   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:32:25.300685   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:25.311604   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:32:25.311678   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:25.322470   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:32:25.322540   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:25.332840   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:32:25.332904   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:25.347488   16583 logs.go:276] 0 containers: []
	W0610 04:32:25.347500   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:25.347561   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:25.358012   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:32:25.358031   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:25.358038   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:25.393791   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:32:25.393799   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:32:25.407687   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:32:25.407697   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:32:25.419201   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:25.419211   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:25.457188   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:32:25.457200   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:32:25.473698   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:32:25.473710   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:32:25.485706   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:32:25.485718   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:32:25.504115   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:32:25.504129   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:32:25.517940   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:25.517954   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:25.541547   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:32:25.541558   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:32:25.553992   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:32:25.554002   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:32:25.565606   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:25.565616   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:25.569941   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:32:25.569947   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:32:25.583910   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:32:25.583922   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:32:25.609655   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:32:25.609666   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:32:25.624497   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:32:25.624507   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:32:25.639309   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:32:25.639320   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:28.154065   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:33.156517   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:33.156648   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:33.170211   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:32:33.170285   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:33.181121   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:32:33.181191   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:33.191947   16583 logs.go:276] 1 containers: [5a8a8efaac51]
	I0610 04:32:33.192012   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:33.202662   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:32:33.202733   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:33.212714   16583 logs.go:276] 1 containers: [02819db3325a]
	I0610 04:32:33.212788   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:33.223636   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:32:33.223709   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:33.234393   16583 logs.go:276] 0 containers: []
	W0610 04:32:33.234405   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:33.234462   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:33.245343   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:32:33.245361   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:33.245367   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:33.283126   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:32:33.283137   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:32:33.302682   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:32:33.302689   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:32:33.314254   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:32:33.314269   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:32:33.326157   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:32:33.326169   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:32:33.337815   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:32:33.337826   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:32:33.354586   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:33.354596   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:33.390375   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:33.390386   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:33.394726   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:32:33.394733   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:32:33.408745   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:32:33.408759   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:32:33.434586   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:33.434597   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:33.457209   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:32:33.457219   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:32:33.468026   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:32:33.468035   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:33.480575   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:32:33.480587   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:32:33.501537   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:32:33.501548   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:32:33.515678   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:32:33.515691   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:32:33.528249   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:32:33.528260   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
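The cycle above is minikube's apiserver health-check loop: each /healthz probe is given a 5-second client timeout, and every failed probe triggers a full sweep of the control-plane container logs before the next attempt. A minimal sketch of that loop, assuming a self-signed apiserver certificate (hence the skipped TLS verification) and a hypothetical gatherLogs helper standing in for the docker-ps/docker-logs sweep:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // gatherLogs is a hypothetical stand-in for the docker-ps/docker-logs
    // sweep shown above; the real implementation shells out over SSH.
    func gatherLogs() {}

    // probeHealthz polls /healthz with a short per-request timeout until it
    // answers 200 OK or the overall deadline passes, gathering logs after
    // each failed attempt.
    func probeHealthz(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the 5 s gap between "Checking" and "stopped"
            Transport: &http.Transport{
                // assumption: the test cluster serves a self-signed certificate
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            gatherLogs()
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(probeHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute)))
    }

The same probe-and-sweep cycle repeats below at roughly eight-second intervals until the wait budget for restartPrimaryControlPlane runs out.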
	I0610 04:32:36.042026   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:41.042803   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:41.043140   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:41.061585   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:32:41.061687   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:41.076507   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:32:41.076587   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:41.088241   16583 logs.go:276] 1 container: [5a8a8efaac51]
	I0610 04:32:41.088309   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:41.099480   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:32:41.099551   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:41.109971   16583 logs.go:276] 1 container: [02819db3325a]
	I0610 04:32:41.110039   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:41.120518   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:32:41.120589   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:41.130119   16583 logs.go:276] 0 containers: []
	W0610 04:32:41.130130   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:41.130182   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:41.140869   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:32:41.140891   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:32:41.140897   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:32:41.164995   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:32:41.165006   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:32:41.176320   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:32:41.176332   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:32:41.190687   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:32:41.190698   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:32:41.208420   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:32:41.208429   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:32:41.221518   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:32:41.221529   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:32:41.235261   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:32:41.235271   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:32:41.247717   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:32:41.247728   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:32:41.259253   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:32:41.259264   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:41.270712   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:32:41.270722   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:32:41.285076   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:32:41.285088   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:32:41.296781   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:41.296791   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:41.331036   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:32:41.331053   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:32:41.345502   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:32:41.345516   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:32:41.357268   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:41.357282   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:41.380895   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:41.380901   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:41.418248   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:41.418257   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:43.924158   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:48.926583   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:48.926844   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:48.947380   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:32:48.947472   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:48.961658   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:32:48.961740   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:48.974063   16583 logs.go:276] 1 container: [5a8a8efaac51]
	I0610 04:32:48.974135   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:48.986670   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:32:48.986741   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:49.000663   16583 logs.go:276] 1 container: [02819db3325a]
	I0610 04:32:49.000740   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:49.037265   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:32:49.037346   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:49.050976   16583 logs.go:276] 0 containers: []
	W0610 04:32:49.050993   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:49.051059   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:49.062226   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:32:49.062248   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:32:49.062255   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:32:49.075997   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:32:49.076007   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:32:49.087554   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:49.087567   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:49.124478   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:32:49.124490   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:32:49.138801   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:32:49.138811   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:32:49.150446   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:32:49.150459   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:32:49.161988   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:32:49.161997   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:32:49.173154   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:49.173179   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:49.195107   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:49.195114   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:49.199202   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:32:49.199208   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:32:49.226012   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:32:49.226021   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:49.237464   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:49.237473   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:49.273849   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:32:49.273858   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:32:49.291047   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:32:49.291056   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:32:49.307835   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:32:49.307847   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:32:49.319899   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:32:49.319909   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:32:49.334197   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:32:49.334206   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:32:51.852221   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:32:56.853791   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:32:56.853917   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:32:56.868654   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:32:56.868748   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:32:56.881218   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:32:56.881298   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:32:56.891715   16583 logs.go:276] 1 container: [5a8a8efaac51]
	I0610 04:32:56.891775   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:32:56.902132   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:32:56.902202   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:32:56.912638   16583 logs.go:276] 1 container: [02819db3325a]
	I0610 04:32:56.912703   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:32:56.923252   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:32:56.923326   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:32:56.936845   16583 logs.go:276] 0 containers: []
	W0610 04:32:56.936855   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:32:56.936915   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:32:56.947611   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:32:56.947629   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:32:56.947635   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:32:56.962517   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:32:56.962530   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:32:56.976879   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:32:56.976888   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:32:56.991490   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:32:56.991500   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:32:57.003481   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:32:57.003490   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:32:57.026649   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:32:57.026655   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:32:57.063796   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:32:57.063805   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:32:57.103581   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:32:57.103592   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:32:57.118242   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:32:57.118252   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:32:57.130316   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:32:57.130328   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:32:57.144051   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:32:57.144063   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:32:57.163587   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:32:57.163599   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:32:57.168201   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:32:57.168209   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:32:57.192638   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:32:57.192648   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:32:57.206648   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:32:57.206659   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:32:57.218084   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:32:57.218095   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:32:57.229262   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:32:57.229272   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:32:59.743622   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:04.746018   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:04.746324   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:04.774333   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:33:04.774462   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:04.792297   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:33:04.792409   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:04.806212   16583 logs.go:276] 1 container: [5a8a8efaac51]
	I0610 04:33:04.806285   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:04.817807   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:33:04.817877   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:04.827951   16583 logs.go:276] 1 container: [02819db3325a]
	I0610 04:33:04.828027   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:04.844358   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:33:04.844431   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:04.854646   16583 logs.go:276] 0 containers: []
	W0610 04:33:04.854657   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:04.854715   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:04.865140   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:33:04.865159   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:04.865164   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:04.887952   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:33:04.887963   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:33:04.910904   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:33:04.910914   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:33:04.925049   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:33:04.925060   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:33:04.937141   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:33:04.937153   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:33:04.951269   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:33:04.951280   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:33:04.962345   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:33:04.962356   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:33:04.978180   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:33:04.978192   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:33:04.989469   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:33:04.989482   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:33:05.001100   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:05.001111   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:05.034967   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:33:05.034978   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:33:05.049551   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:33:05.049561   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:33:05.074396   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:33:05.074405   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:33:05.089600   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:33:05.089609   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:33:05.101932   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:33:05.101943   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:33:05.118373   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:05.118386   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:05.155704   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:05.155715   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:07.662549   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:12.665174   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:12.665564   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:12.703245   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:33:12.703382   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:12.727329   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:33:12.727442   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:12.742267   16583 logs.go:276] 1 container: [5a8a8efaac51]
	I0610 04:33:12.742349   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:12.754547   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:33:12.754622   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:12.765044   16583 logs.go:276] 1 container: [02819db3325a]
	I0610 04:33:12.765118   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:12.777190   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:33:12.777276   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:12.787915   16583 logs.go:276] 0 containers: []
	W0610 04:33:12.787925   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:12.787985   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:12.798803   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:33:12.798819   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:33:12.798827   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:33:12.817416   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:33:12.817430   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:33:12.834701   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:12.834711   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:12.870035   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:33:12.870052   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:33:12.883942   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:33:12.883953   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:33:12.912485   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:33:12.912493   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:33:12.925470   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:33:12.925484   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:33:12.940133   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:33:12.940144   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:33:12.951351   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:12.951361   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:12.973007   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:33:12.973014   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:33:12.984513   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:12.984525   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:12.989108   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:33:12.989115   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:33:13.002725   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:33:13.002735   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:33:13.014321   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:33:13.014332   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:33:13.032414   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:33:13.032425   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:33:13.050910   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:13.050920   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:13.087162   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:33:13.087171   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:33:15.602416   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:20.604765   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:20.604884   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:33:20.617062   16583 logs.go:276] 2 containers: [7ebe9c78889a 1fe2347ffc09]
	I0610 04:33:20.617135   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:33:20.627392   16583 logs.go:276] 2 containers: [58df8de1f1c7 0e7a95d293ba]
	I0610 04:33:20.627459   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:33:20.640726   16583 logs.go:276] 1 container: [5a8a8efaac51]
	I0610 04:33:20.640800   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:33:20.662498   16583 logs.go:276] 2 containers: [cb5d25a52290 882aa3940d87]
	I0610 04:33:20.662572   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:33:20.672936   16583 logs.go:276] 1 container: [02819db3325a]
	I0610 04:33:20.673002   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:33:20.683503   16583 logs.go:276] 2 containers: [441cfe522307 1dbd81d082f2]
	I0610 04:33:20.683566   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:33:20.693514   16583 logs.go:276] 0 containers: []
	W0610 04:33:20.693525   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:33:20.693577   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:33:20.703916   16583 logs.go:276] 2 containers: [2d0ce52adfe5 0e0713873d07]
	I0610 04:33:20.703936   16583 logs.go:123] Gathering logs for kube-proxy [02819db3325a] ...
	I0610 04:33:20.703941   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02819db3325a"
	I0610 04:33:20.716076   16583 logs.go:123] Gathering logs for storage-provisioner [0e0713873d07] ...
	I0610 04:33:20.716087   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e0713873d07"
	I0610 04:33:20.727786   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:33:20.727800   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:33:20.743524   16583 logs.go:123] Gathering logs for kube-apiserver [7ebe9c78889a] ...
	I0610 04:33:20.743534   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ebe9c78889a"
	I0610 04:33:20.760634   16583 logs.go:123] Gathering logs for coredns [5a8a8efaac51] ...
	I0610 04:33:20.760644   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a8a8efaac51"
	I0610 04:33:20.778363   16583 logs.go:123] Gathering logs for kube-scheduler [882aa3940d87] ...
	I0610 04:33:20.778377   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 882aa3940d87"
	I0610 04:33:20.793964   16583 logs.go:123] Gathering logs for kube-apiserver [1fe2347ffc09] ...
	I0610 04:33:20.793976   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fe2347ffc09"
	I0610 04:33:20.822194   16583 logs.go:123] Gathering logs for storage-provisioner [2d0ce52adfe5] ...
	I0610 04:33:20.822210   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d0ce52adfe5"
	I0610 04:33:20.833711   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:33:20.833729   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:33:20.870368   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:33:20.870378   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:33:20.874340   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:33:20.874348   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:33:20.908169   16583 logs.go:123] Gathering logs for kube-controller-manager [441cfe522307] ...
	I0610 04:33:20.908182   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 441cfe522307"
	I0610 04:33:20.925444   16583 logs.go:123] Gathering logs for kube-controller-manager [1dbd81d082f2] ...
	I0610 04:33:20.925456   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dbd81d082f2"
	I0610 04:33:20.938278   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:33:20.938288   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:33:20.961103   16583 logs.go:123] Gathering logs for etcd [58df8de1f1c7] ...
	I0610 04:33:20.961110   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58df8de1f1c7"
	I0610 04:33:20.981477   16583 logs.go:123] Gathering logs for etcd [0e7a95d293ba] ...
	I0610 04:33:20.981487   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7a95d293ba"
	I0610 04:33:21.004331   16583 logs.go:123] Gathering logs for kube-scheduler [cb5d25a52290] ...
	I0610 04:33:21.004340   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb5d25a52290"
	I0610 04:33:23.518919   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:28.521479   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:28.521622   16583 kubeadm.go:591] duration metric: took 4m3.746237917s to restartPrimaryControlPlane
	W0610 04:33:28.521723   16583 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 04:33:28.521766   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0610 04:33:29.599866   16583 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.07808025s)
	I0610 04:33:29.599921   16583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 04:33:29.605083   16583 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 04:33:29.607921   16583 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 04:33:29.610710   16583 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 04:33:29.610715   16583 kubeadm.go:156] found existing configuration files:
	
	I0610 04:33:29.610734   16583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/admin.conf
	I0610 04:33:29.613348   16583 kubeadm.go:162] "https://control-plane.minikube.internal:53011" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 04:33:29.613368   16583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 04:33:29.616344   16583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/kubelet.conf
	I0610 04:33:29.619050   16583 kubeadm.go:162] "https://control-plane.minikube.internal:53011" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 04:33:29.619074   16583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 04:33:29.621578   16583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/controller-manager.conf
	I0610 04:33:29.624420   16583 kubeadm.go:162] "https://control-plane.minikube.internal:53011" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 04:33:29.624442   16583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 04:33:29.627373   16583 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/scheduler.conf
	I0610 04:33:29.629658   16583 kubeadm.go:162] "https://control-plane.minikube.internal:53011" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53011 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 04:33:29.629679   16583 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
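The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint; here all four files are absent (grep exits with status 2), so the rm calls are no-ops. A local sketch of the same logic, assuming direct filesystem access instead of the SSH runner:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "path/filepath"
    )

    // cleanStaleConfigs removes kubeconfigs that do not mention the expected
    // control-plane endpoint; a missing file counts as stale too, mirroring
    // the "Process exited with status 2" branches in the log.
    func cleanStaleConfigs(dir, endpoint string) {
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := filepath.Join(dir, name)
            data, err := os.ReadFile(path)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                fmt.Printf("%q not found in %s - removing\n", endpoint, path)
                os.Remove(path) // error ignored: the file may already be absent
            }
        }
    }

    func main() {
        cleanStaleConfigs("/etc/kubernetes", "https://control-plane.minikube.internal:53011")
    }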
	I0610 04:33:29.632529   16583 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 04:33:29.650886   16583 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0610 04:33:29.650923   16583 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 04:33:29.699452   16583 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 04:33:29.699517   16583 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 04:33:29.699579   16583 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0610 04:33:29.749302   16583 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 04:33:29.752495   16583 out.go:204]   - Generating certificates and keys ...
	I0610 04:33:29.752531   16583 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 04:33:29.752564   16583 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 04:33:29.752602   16583 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 04:33:29.752657   16583 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 04:33:29.752695   16583 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 04:33:29.752730   16583 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 04:33:29.752759   16583 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 04:33:29.752800   16583 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 04:33:29.752838   16583 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 04:33:29.752872   16583 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 04:33:29.752888   16583 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 04:33:29.752927   16583 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 04:33:29.844788   16583 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 04:33:30.069005   16583 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 04:33:30.170691   16583 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 04:33:30.289261   16583 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 04:33:30.318199   16583 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 04:33:30.318556   16583 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 04:33:30.318638   16583 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 04:33:30.388407   16583 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 04:33:30.395534   16583 out.go:204]   - Booting up control plane ...
	I0610 04:33:30.395583   16583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 04:33:30.395666   16583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 04:33:30.395713   16583 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 04:33:30.395787   16583 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 04:33:30.395893   16583 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 04:33:35.399752   16583 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.003397 seconds
	I0610 04:33:35.399912   16583 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 04:33:35.409583   16583 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 04:33:35.928319   16583 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 04:33:35.928431   16583 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-227000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 04:33:36.431835   16583 kubeadm.go:309] [bootstrap-token] Using token: ykjh4q.pu79maw887wsxcg0
	I0610 04:33:36.438959   16583 out.go:204]   - Configuring RBAC rules ...
	I0610 04:33:36.439027   16583 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 04:33:36.439069   16583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 04:33:36.440603   16583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I0610 04:33:36.441430   16583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0610 04:33:36.442281   16583 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 04:33:36.443127   16583 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 04:33:36.446588   16583 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 04:33:36.626364   16583 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 04:33:36.839888   16583 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 04:33:36.839969   16583 kubeadm.go:309] 
	I0610 04:33:36.840014   16583 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 04:33:36.840018   16583 kubeadm.go:309] 
	I0610 04:33:36.840052   16583 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 04:33:36.840099   16583 kubeadm.go:309] 
	I0610 04:33:36.840149   16583 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 04:33:36.840244   16583 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 04:33:36.840350   16583 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 04:33:36.840359   16583 kubeadm.go:309] 
	I0610 04:33:36.840428   16583 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 04:33:36.840439   16583 kubeadm.go:309] 
	I0610 04:33:36.840510   16583 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 04:33:36.840521   16583 kubeadm.go:309] 
	I0610 04:33:36.840579   16583 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 04:33:36.840681   16583 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 04:33:36.840785   16583 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 04:33:36.840793   16583 kubeadm.go:309] 
	I0610 04:33:36.840964   16583 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 04:33:36.841034   16583 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 04:33:36.841039   16583 kubeadm.go:309] 
	I0610 04:33:36.841178   16583 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ykjh4q.pu79maw887wsxcg0 \
	I0610 04:33:36.841282   16583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:56b5bf6ce93f42fffc51be5724cc4c4fa0c9b611b35ba669ffa5cef3ff8fcf22 \
	I0610 04:33:36.841348   16583 kubeadm.go:309] 	--control-plane 
	I0610 04:33:36.841392   16583 kubeadm.go:309] 
	I0610 04:33:36.841434   16583 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 04:33:36.841442   16583 kubeadm.go:309] 
	I0610 04:33:36.841490   16583 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ykjh4q.pu79maw887wsxcg0 \
	I0610 04:33:36.841567   16583 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:56b5bf6ce93f42fffc51be5724cc4c4fa0c9b611b35ba669ffa5cef3ff8fcf22 
	I0610 04:33:36.841626   16583 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
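The --discovery-token-ca-cert-hash value printed in both join commands is a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate; joining nodes use it to pin the CA during bootstrap. A sketch that recomputes it, assuming the default kubeadm CA path:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // assumption: default kubeadm CA location on the control-plane node
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

Run against this cluster's CA, the output would match the sha256:56b5bf... value above.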
	I0610 04:33:36.841640   16583 cni.go:84] Creating CNI manager for ""
	I0610 04:33:36.841648   16583 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:33:36.844480   16583 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 04:33:36.848366   16583 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 04:33:36.853152   16583 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
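The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are a bridge CNI configuration (the "recommending bridge" decision above). A sketch that writes a comparable conflist; the plugin chain and subnet here are illustrative assumptions, not the exact contents of minikube's file:

    package main

    import "os"

    // minimal bridge CNI conflist; name, subnet, and plugin chain are
    // illustrative, not the exact 496-byte payload from the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }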
	I0610 04:33:36.857996   16583 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 04:33:36.858089   16583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-227000 minikube.k8s.io/updated_at=2024_06_10T04_33_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c2b65c1940ca3bdd8a4d1a84aa1ecb6d007e0b42 minikube.k8s.io/name=stopped-upgrade-227000 minikube.k8s.io/primary=true
	I0610 04:33:36.858095   16583 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 04:33:36.861104   16583 ops.go:34] apiserver oom_adj: -16
	I0610 04:33:36.894287   16583 kubeadm.go:1107] duration metric: took 36.276542ms to wait for elevateKubeSystemPrivileges
	W0610 04:33:36.903303   16583 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 04:33:36.903313   16583 kubeadm.go:393] duration metric: took 4m12.145067875s to StartCluster
	I0610 04:33:36.903323   16583 settings.go:142] acquiring lock: {Name:mk6aafede331d0a23ef380eee9d6038b0fb4c41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:33:36.903488   16583 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:33:36.903894   16583 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/kubeconfig: {Name:mke1ab156d45cd5cbace7e8cb5713141e8116718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:33:36.904120   16583 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:33:36.908286   16583 out.go:177] * Verifying Kubernetes components...
	I0610 04:33:36.904132   16583 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 04:33:36.904207   16583 config.go:182] Loaded profile config "stopped-upgrade-227000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 04:33:36.915294   16583 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-227000"
	I0610 04:33:36.915301   16583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 04:33:36.915315   16583 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-227000"
	W0610 04:33:36.915318   16583 addons.go:243] addon storage-provisioner should already be in state true
	I0610 04:33:36.915302   16583 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-227000"
	I0610 04:33:36.915334   16583 host.go:66] Checking if "stopped-upgrade-227000" exists ...
	I0610 04:33:36.915346   16583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-227000"
	I0610 04:33:36.916728   16583 kapi.go:59] client config for stopped-upgrade-227000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/client.key", CAFile:"/Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ee4460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
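That client config maps one-to-one onto client-go's rest.Config. A minimal sketch that builds the equivalent client from the same certificate paths and issues the StorageClass list call that later times out in this run:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/stopped-upgrade-227000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19052-14289/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // the call behind the default-storageclass addon check below
        _, err = clientset.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        fmt.Println(err) // "dial tcp 10.0.2.15:8443: i/o timeout" in this run
    }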
	I0610 04:33:36.916844   16583 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-227000"
	W0610 04:33:36.916850   16583 addons.go:243] addon default-storageclass should already be in state true
	I0610 04:33:36.916857   16583 host.go:66] Checking if "stopped-upgrade-227000" exists ...
	I0610 04:33:36.921329   16583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 04:33:36.925368   16583 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 04:33:36.925383   16583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 04:33:36.925395   16583 sshutil.go:53] new ssh client: &{IP:localhost Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/id_rsa Username:docker}
	I0610 04:33:36.926258   16583 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 04:33:36.926263   16583 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 04:33:36.926268   16583 sshutil.go:53] new ssh client: &{IP:localhost Port:52979 SSHKeyPath:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/id_rsa Username:docker}
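The "scp memory --> ..." lines indicate the addon manifests are rendered in memory and streamed to the guest over the SSH connection described just above (localhost:52979, user docker) rather than copied from a local file. A rough sketch with golang.org/x/crypto/ssh, using sudo tee as a stand-in for minikube's transfer helper:

    package main

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushBytes writes data to remotePath on the guest by piping it into
    // "sudo tee" over an SSH session, roughly what the scp-memory lines do.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/stopped-upgrade-227000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, assumption
        }
        client, err := ssh.Dial("tcp", "localhost:52979", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        manifest := []byte("# storage-provisioner manifest would go here\n")
        if err := pushBytes(client, manifest, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            panic(err)
        }
    }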
	I0610 04:33:36.989722   16583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 04:33:36.994984   16583 api_server.go:52] waiting for apiserver process to appear ...
	I0610 04:33:36.995027   16583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 04:33:36.998496   16583 api_server.go:72] duration metric: took 94.364917ms to wait for apiserver process to appear ...
	I0610 04:33:36.998503   16583 api_server.go:88] waiting for apiserver healthz status ...
	I0610 04:33:36.998510   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:37.019186   16583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 04:33:37.020870   16583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
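Both applies go through the version-pinned kubectl with KUBECONFIG pointed at the in-VM kubeconfig. On the guest, the equivalent invocation can be sketched with os/exec (paths taken from the log lines above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // sudo accepts VAR=value pairs before the command, as in the log line
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.1/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("apply failed:", err)
        }
    }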
	I0610 04:33:41.999754   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:41.999773   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:47.000636   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:47.000657   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:52.001188   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:52.001210   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:33:57.001645   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:33:57.001666   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:02.002144   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:02.002173   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:07.003084   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:07.003154   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0610 04:34:07.392875   16583 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
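Per the error text, the default-storageclass callback fails at the point where it lists StorageClasses through the clientset before marking one as default; with the apiserver unreachable, that List call is what returns the dial timeout. A sketch of the listing step, assuming client-go (the helper name is illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listStorageClasses mirrors the List named in the error above; when the
// apiserver cannot be dialed it fails with an i/o timeout, as in the log.
func listStorageClasses(cs kubernetes.Interface) error {
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return fmt.Errorf("Error listing StorageClasses: %w", err)
	}
	fmt.Println(len(scs.Items), "storage classes found")
	return nil
}
```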
	I0610 04:34:07.397000   16583 out.go:177] * Enabled addons: storage-provisioner
	I0610 04:34:07.404764   16583 addons.go:510] duration metric: took 30.500424166s for enable addons: enabled=[storage-provisioner]
	I0610 04:34:12.004013   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:12.004055   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:17.004730   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:17.004770   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:22.005451   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:22.005476   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:27.007406   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:27.007456   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:32.009010   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:32.009054   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:37.011405   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:37.011566   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:34:37.025905   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:34:37.025969   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:34:37.036371   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:34:37.036428   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:34:37.046810   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:34:37.046883   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:34:37.057709   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:34:37.057769   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:34:37.067983   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:34:37.068058   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:34:37.078103   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:34:37.078170   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:34:37.088845   16583 logs.go:276] 0 containers: []
	W0610 04:34:37.088857   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:34:37.088918   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:34:37.099311   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:34:37.099325   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:34:37.099331   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:34:37.117587   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:34:37.117597   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:34:37.129757   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:34:37.129768   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:34:37.148495   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:34:37.148508   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:34:37.168965   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:34:37.168976   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:34:37.184926   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:34:37.184938   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:34:37.196629   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:34:37.196641   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:34:37.200853   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:34:37.200862   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:34:37.236490   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:34:37.236500   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:34:37.251827   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:34:37.251840   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:34:37.263970   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:34:37.263987   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:34:37.276030   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:34:37.276044   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:34:37.300191   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:34:37.300204   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
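The gathering cycle above follows one pattern per component: resolve the container ID with a docker ps name filter, then tail the last 400 lines of its logs. A sketch of that two-step pattern, assuming the docker CLI is on PATH (the function name is illustrative, not minikube's logs.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs finds a k8s_<component> container and tails its logs,
// mirroring the "docker ps -a --filter=name=..." / "docker logs --tail 400"
// pairs in the cycle above.
func tailComponentLogs(component string) (string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return "", err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return "", fmt.Errorf("no container was found matching %q", component)
	}
	logs, err := exec.Command("docker", "logs", "--tail", "400", ids[0]).CombinedOutput()
	return string(logs), err
}

func main() {
	out, err := tailComponentLogs("kube-apiserver")
	fmt.Println(err, len(out), "bytes of logs")
}
```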
	I0610 04:34:39.838834   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:44.841115   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:44.841222   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:34:44.852688   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:34:44.852758   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:34:44.863676   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:34:44.863747   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:34:44.875567   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:34:44.875631   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:34:44.886234   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:34:44.886304   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:34:44.904439   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:34:44.904511   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:34:44.915053   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:34:44.915120   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:34:44.925496   16583 logs.go:276] 0 containers: []
	W0610 04:34:44.925509   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:34:44.925575   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:34:44.938318   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:34:44.938334   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:34:44.938340   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:34:44.949820   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:34:44.949833   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:34:44.975038   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:34:44.975050   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:34:44.986555   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:34:44.986568   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:34:45.025823   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:34:45.025836   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:34:45.030249   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:34:45.030255   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:34:45.042003   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:34:45.042013   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:34:45.059430   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:34:45.059443   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:34:45.071754   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:34:45.071767   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:34:45.087092   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:34:45.087102   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:34:45.121786   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:34:45.121798   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:34:45.136037   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:34:45.136048   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:34:45.150436   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:34:45.150446   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:34:47.664274   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:34:52.666533   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:34:52.666737   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:34:52.691895   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:34:52.692014   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:34:52.718364   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:34:52.718444   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:34:52.730653   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:34:52.730717   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:34:52.741587   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:34:52.741657   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:34:52.752099   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:34:52.752177   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:34:52.766693   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:34:52.766766   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:34:52.777649   16583 logs.go:276] 0 containers: []
	W0610 04:34:52.777660   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:34:52.777718   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:34:52.788290   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:34:52.788304   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:34:52.788310   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:34:52.800200   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:34:52.800209   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:34:52.812586   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:34:52.812599   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:34:52.824532   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:34:52.824543   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:34:52.863022   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:34:52.863030   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:34:52.877773   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:34:52.877784   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:34:52.889714   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:34:52.889725   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:34:52.905404   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:34:52.905416   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:34:52.923290   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:34:52.923299   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:34:52.948050   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:34:52.948058   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:34:52.952029   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:34:52.952036   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:34:52.995943   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:34:52.995954   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:34:53.010024   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:34:53.010036   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:34:55.524037   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:00.525920   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:00.526207   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:00.560561   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:35:00.560693   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:00.580344   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:35:00.580438   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:00.595187   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:35:00.595260   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:00.607484   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:35:00.607554   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:00.618756   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:35:00.618820   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:00.632952   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:35:00.633018   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:00.643062   16583 logs.go:276] 0 containers: []
	W0610 04:35:00.643072   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:00.643126   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:00.653752   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:35:00.653769   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:35:00.653776   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:35:00.668055   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:35:00.668065   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:35:00.683962   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:35:00.683973   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:35:00.705127   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:00.705139   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:00.729782   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:35:00.729795   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:00.741665   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:00.741676   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:00.783548   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:00.783561   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:00.788214   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:35:00.788225   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:35:00.805003   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:35:00.805014   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:35:00.816347   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:35:00.816359   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:35:00.827462   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:35:00.827473   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:35:00.843062   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:00.843072   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:00.881025   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:35:00.881038   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:35:03.397462   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:08.399773   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:08.399959   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:08.431293   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:35:08.431421   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:08.448879   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:35:08.448961   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:08.461739   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:35:08.461811   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:08.474221   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:35:08.474297   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:08.486705   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:35:08.486767   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:08.497658   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:35:08.497724   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:08.507610   16583 logs.go:276] 0 containers: []
	W0610 04:35:08.507624   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:08.507690   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:08.518460   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:35:08.518478   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:08.518485   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:08.542508   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:35:08.542518   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:08.554111   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:08.554127   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:08.591189   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:35:08.591200   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:35:08.602365   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:35:08.602380   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:35:08.616615   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:35:08.616630   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:35:08.630538   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:35:08.630551   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:35:08.641832   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:35:08.641843   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:35:08.653329   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:35:08.653342   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:35:08.671950   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:35:08.671961   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:35:08.688316   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:08.688327   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:08.692576   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:08.692584   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:08.727036   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:35:08.727048   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:35:11.245185   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:16.247455   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:16.247624   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:16.273488   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:35:16.273592   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:16.291266   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:35:16.291341   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:16.301879   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:35:16.301952   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:16.312547   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:35:16.312618   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:16.322806   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:35:16.322878   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:16.333143   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:35:16.333217   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:16.343323   16583 logs.go:276] 0 containers: []
	W0610 04:35:16.343334   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:16.343386   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:16.353868   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:35:16.353887   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:16.353892   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:16.390377   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:35:16.390391   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:35:16.402225   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:35:16.402237   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:35:16.414154   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:16.414164   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:16.438364   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:35:16.438372   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:35:16.450420   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:16.450430   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:16.487065   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:16.487074   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:16.491475   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:35:16.491482   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:35:16.505829   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:35:16.505841   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:35:16.522008   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:35:16.522020   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:35:16.533955   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:35:16.533969   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:35:16.549027   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:35:16.549040   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:35:16.566640   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:35:16.566650   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:19.079146   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:24.081445   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:24.081621   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:24.102336   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:35:24.102440   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:24.118370   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:35:24.118448   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:24.130343   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:35:24.130413   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:24.146041   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:35:24.146111   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:24.158091   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:35:24.158162   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:24.168533   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:35:24.168600   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:24.179144   16583 logs.go:276] 0 containers: []
	W0610 04:35:24.179156   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:24.179213   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:24.194361   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:35:24.194376   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:35:24.194381   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:35:24.205672   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:24.205683   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:24.242222   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:24.242231   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:24.282892   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:35:24.282906   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:35:24.296348   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:35:24.296359   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:35:24.317089   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:35:24.317104   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:35:24.328960   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:35:24.328972   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:35:24.346298   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:24.346309   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:24.350949   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:35:24.350957   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:35:24.365539   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:35:24.365549   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:35:24.378911   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:35:24.378922   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:35:24.390918   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:24.390929   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:24.414883   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:35:24.414891   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:26.928179   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:31.929620   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:31.929729   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:31.940500   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:35:31.940593   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:31.951407   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:35:31.951484   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:31.961945   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:35:31.962014   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:31.974571   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:35:31.974639   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:31.989046   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:35:31.989128   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:32.010975   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:35:32.011047   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:32.025864   16583 logs.go:276] 0 containers: []
	W0610 04:35:32.025873   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:32.025933   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:32.036241   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:35:32.036257   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:32.036262   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:32.072292   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:32.072299   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:32.076273   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:35:32.076281   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:35:32.090227   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:35:32.090238   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:35:32.107886   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:32.107897   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:32.132435   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:35:32.132443   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:32.143569   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:32.143585   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:32.185466   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:35:32.185477   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:35:32.200120   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:35:32.200135   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:35:32.212163   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:35:32.212173   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:35:32.223706   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:35:32.223721   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:35:32.239095   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:35:32.239106   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:35:32.251267   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:35:32.251279   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:35:34.765283   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:39.767633   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:39.767775   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:39.783734   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:35:39.783812   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:39.795027   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:35:39.795103   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:39.806243   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:35:39.806321   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:39.817167   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:35:39.817257   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:39.827824   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:35:39.827896   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:39.840576   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:35:39.840642   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:39.851421   16583 logs.go:276] 0 containers: []
	W0610 04:35:39.851433   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:39.851494   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:39.862597   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:35:39.862614   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:35:39.862620   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:35:39.874261   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:35:39.874271   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:35:39.885887   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:35:39.885897   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:35:39.897884   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:39.897893   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:39.922556   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:35:39.922566   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:39.934038   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:39.934051   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:39.972104   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:39.972114   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:40.008120   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:35:40.008132   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:35:40.024227   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:35:40.024239   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:35:40.038384   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:35:40.038396   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:35:40.053881   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:35:40.053892   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:35:40.072044   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:35:40.072054   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:35:40.089283   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:40.089294   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:42.593771   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:47.596120   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:47.596286   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:47.616411   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:35:47.616496   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:47.632729   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:35:47.632793   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:47.644863   16583 logs.go:276] 2 containers: [491b628d6903 3d224da1c15f]
	I0610 04:35:47.644931   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:47.655852   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:35:47.655909   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:47.666356   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:35:47.666422   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:47.676514   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:35:47.676575   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:47.689203   16583 logs.go:276] 0 containers: []
	W0610 04:35:47.689214   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:47.689271   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:47.699293   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:35:47.699310   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:35:47.699315   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:35:47.710925   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:35:47.710937   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:35:47.726411   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:47.726424   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:47.730592   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:47.730601   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:47.767103   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:35:47.767114   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:35:47.781166   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:35:47.781175   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:35:47.793256   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:35:47.793267   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:35:47.804820   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:35:47.804834   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:35:47.826105   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:35:47.826126   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:35:47.839859   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:47.839870   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:47.865400   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:47.865418   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:47.908512   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:35:47.908537   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:35:47.937053   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:35:47.937066   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:50.462510   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:35:55.464846   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:35:55.465026   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:35:55.482119   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:35:55.482190   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:35:55.495171   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:35:55.495243   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:35:55.507729   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:35:55.507816   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:35:55.517912   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:35:55.517972   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:35:55.530504   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:35:55.530563   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:35:55.540512   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:35:55.540571   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:35:55.557247   16583 logs.go:276] 0 containers: []
	W0610 04:35:55.557259   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:35:55.557319   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:35:55.567860   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:35:55.567880   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:35:55.567885   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:35:55.582507   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:35:55.582517   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:35:55.594060   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:35:55.594071   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:35:55.605439   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:35:55.605449   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:35:55.620792   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:35:55.620803   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:35:55.632409   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:35:55.632420   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:35:55.649877   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:35:55.649886   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:35:55.674551   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:35:55.674561   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:35:55.712997   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:35:55.713007   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:35:55.726985   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:35:55.726996   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:35:55.741171   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:35:55.741184   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:35:55.753153   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:35:55.753163   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:35:55.764765   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:35:55.764777   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:35:55.776476   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:35:55.776487   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:35:55.780686   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:35:55.780693   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:35:58.318420   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:03.320712   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:03.320811   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:03.333445   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:36:03.333522   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:03.344925   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:36:03.344990   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:03.355998   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:36:03.356070   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:03.367123   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:36:03.367192   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:03.377020   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:36:03.377075   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:03.387347   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:36:03.387404   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:03.397783   16583 logs.go:276] 0 containers: []
	W0610 04:36:03.397797   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:03.397855   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:03.408613   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:36:03.408630   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:03.408637   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:03.412923   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:03.412933   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:03.446552   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:36:03.446564   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:36:03.458277   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:36:03.458289   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:03.470245   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:36:03.470256   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:36:03.484554   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:36:03.484565   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:36:03.495562   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:36:03.495573   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:36:03.509080   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:36:03.509090   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:36:03.520920   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:36:03.520932   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:36:03.543113   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:36:03.543123   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:36:03.555944   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:36:03.555955   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:36:03.569730   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:03.569741   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:03.608099   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:36:03.608107   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:36:03.623622   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:36:03.623633   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:36:03.643654   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:03.643664   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:06.170225   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:11.172609   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:11.172940   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:11.206883   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:36:11.207013   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:11.226428   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:36:11.226530   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:11.241123   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:36:11.241208   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:11.253774   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:36:11.253843   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:11.273419   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:36:11.273487   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:11.284874   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:36:11.284949   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:11.295447   16583 logs.go:276] 0 containers: []
	W0610 04:36:11.295460   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:11.295521   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:11.305917   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:36:11.305934   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:11.305940   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:11.310347   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:36:11.310353   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:36:11.325966   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:36:11.325979   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:36:11.338479   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:36:11.338490   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:36:11.352660   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:36:11.352672   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:36:11.364160   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:36:11.364170   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:36:11.376400   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:36:11.376412   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:36:11.394075   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:36:11.394087   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:36:11.407318   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:11.407330   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:11.446396   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:11.446405   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:11.482314   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:36:11.482327   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:36:11.496002   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:36:11.496011   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:36:11.507578   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:36:11.507588   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:36:11.519997   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:11.520007   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:11.545220   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:36:11.545228   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:14.058354   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:19.060704   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:19.060916   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:19.089345   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:36:19.089431   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:19.102712   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:36:19.102787   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:19.119933   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:36:19.120000   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:19.130792   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:36:19.130863   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:19.141176   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:36:19.141242   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:19.151529   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:36:19.151603   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:19.161750   16583 logs.go:276] 0 containers: []
	W0610 04:36:19.161762   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:19.161819   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:19.172275   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:36:19.172291   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:19.172297   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:19.210093   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:36:19.210103   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:36:19.224748   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:36:19.224759   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:36:19.236824   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:36:19.236835   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:36:19.254144   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:36:19.254156   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:36:19.265867   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:36:19.265877   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:36:19.282918   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:36:19.282930   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:36:19.298772   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:36:19.298783   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:36:19.320493   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:19.320503   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:19.345319   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:36:19.345327   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:19.357011   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:19.357022   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:19.361244   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:19.361250   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:19.395278   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:36:19.395289   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:36:19.415091   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:36:19.415101   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:36:19.426663   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:36:19.426673   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:36:21.944434   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:26.946769   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:26.946872   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:26.959809   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:36:26.959877   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:26.970820   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:36:26.970877   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:26.983355   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:36:26.983417   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:26.994497   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:36:26.994569   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:27.006550   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:36:27.006618   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:27.017002   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:36:27.017072   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:27.027426   16583 logs.go:276] 0 containers: []
	W0610 04:36:27.027440   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:27.027497   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:27.041613   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:36:27.041634   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:27.041641   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:27.079518   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:27.079528   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:27.084524   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:27.084532   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:27.118323   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:36:27.118339   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:36:27.130299   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:36:27.130311   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:36:27.146798   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:36:27.146809   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:36:27.158348   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:36:27.158362   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:36:27.174782   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:36:27.174796   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:36:27.186286   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:36:27.186300   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:36:27.197781   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:36:27.197793   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:36:27.209884   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:36:27.209898   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:36:27.224090   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:36:27.224104   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:36:27.236388   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:36:27.236400   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:36:27.253688   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:27.253699   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:27.277614   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:36:27.277622   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:29.789165   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:34.790089   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:34.790217   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:34.804146   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:36:34.804229   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:34.822672   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:36:34.822745   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:34.833405   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:36:34.833471   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:34.844558   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:36:34.844627   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:34.855831   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:36:34.855894   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:34.866708   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:36:34.866777   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:34.876828   16583 logs.go:276] 0 containers: []
	W0610 04:36:34.876843   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:34.876901   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:34.887148   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:36:34.887167   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:34.887172   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:34.925652   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:36:34.925661   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:36:34.936902   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:36:34.936912   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:36:34.953684   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:34.953695   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:34.978146   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:36:34.978159   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:34.990647   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:34.990657   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:34.994803   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:36:34.994811   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:36:35.009136   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:36:35.009148   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:36:35.026869   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:36:35.026879   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:36:35.040172   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:36:35.040186   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:36:35.052153   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:35.052164   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:35.088214   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:36:35.088228   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:36:35.105674   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:36:35.105683   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:36:35.117704   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:36:35.117715   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:36:35.129469   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:36:35.129479   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:36:37.649248   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:42.651697   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:42.652069   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:42.694652   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:36:42.694784   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:42.720089   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:36:42.720189   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:42.735277   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:36:42.735358   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:42.747405   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:36:42.747474   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:42.758032   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:36:42.758104   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:42.771482   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:36:42.771553   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:42.782352   16583 logs.go:276] 0 containers: []
	W0610 04:36:42.782364   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:42.782427   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:42.795522   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:36:42.795540   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:36:42.795545   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:36:42.807776   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:36:42.807789   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:36:42.819462   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:36:42.819473   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:36:42.837600   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:36:42.837611   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:42.849488   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:42.849500   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:42.884212   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:36:42.884222   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:36:42.899484   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:36:42.899497   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:36:42.911499   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:36:42.911510   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:36:42.923461   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:42.923474   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:42.947490   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:42.947497   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:42.983306   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:36:42.983315   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:36:42.998600   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:36:42.998610   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:36:43.018282   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:36:43.018291   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:36:43.031346   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:43.031356   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:43.036276   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:36:43.036285   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:36:45.552497   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:50.554879   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:50.555159   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:50.588980   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:36:50.589096   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:50.608040   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:36:50.608131   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:50.622782   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:36:50.622861   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:50.634607   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:36:50.634671   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:50.646442   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:36:50.646506   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:50.657107   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:36:50.657169   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:50.667154   16583 logs.go:276] 0 containers: []
	W0610 04:36:50.667167   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:50.667224   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:50.677763   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:36:50.677781   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:36:50.677786   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:36:50.689897   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:36:50.689909   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:36:50.707116   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:36:50.707127   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:36:50.718893   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:36:50.718903   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:36:50.730443   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:36:50.730453   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:36:50.741965   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:36:50.741978   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:36:50.753961   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:50.753975   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:50.778179   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:50.778188   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:50.814556   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:36:50.814567   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:36:50.828640   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:36:50.828651   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:36:50.841123   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:50.841135   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:50.880892   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:36:50.880903   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:36:50.894850   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:36:50.894861   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:50.907109   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:50.907120   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:50.911589   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:36:50.911596   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:36:53.436007   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:36:58.438359   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:36:58.438454   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:36:58.448865   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:36:58.448932   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:36:58.459497   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:36:58.459565   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:36:58.474451   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:36:58.474518   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:36:58.484905   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:36:58.484980   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:36:58.495440   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:36:58.495503   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:36:58.510868   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:36:58.510931   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:36:58.520923   16583 logs.go:276] 0 containers: []
	W0610 04:36:58.520936   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:36:58.520990   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:36:58.531558   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:36:58.531582   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:36:58.531588   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:36:58.545702   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:36:58.545712   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:36:58.557731   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:36:58.557743   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:36:58.569518   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:36:58.569529   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:36:58.581142   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:36:58.581154   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:36:58.593589   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:36:58.593601   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:36:58.618038   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:36:58.618048   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:36:58.622184   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:36:58.622192   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:36:58.656050   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:36:58.656064   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:36:58.668146   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:36:58.668158   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:36:58.685362   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:36:58.685375   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:36:58.699083   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:36:58.699097   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:36:58.735924   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:36:58.735931   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:36:58.751716   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:36:58.751731   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:36:58.766386   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:36:58.766400   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:37:01.281314   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:06.283657   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:06.283863   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:06.314643   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:37:06.314733   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:06.328850   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:37:06.328916   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:06.340444   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:37:06.340520   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:06.350897   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:37:06.350973   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:06.361501   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:37:06.361563   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:06.372646   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:37:06.372711   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:06.382796   16583 logs.go:276] 0 containers: []
	W0610 04:37:06.382807   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:06.382862   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:06.393265   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:37:06.393286   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:37:06.393291   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:37:06.410810   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:06.410820   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:06.434247   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:37:06.434254   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:37:06.446065   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:06.446077   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:06.480551   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:37:06.480563   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:37:06.495104   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:37:06.495115   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:37:06.506704   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:06.506713   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:06.544393   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:37:06.544400   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:37:06.556357   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:37:06.556367   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:37:06.568107   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:37:06.568118   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:06.584008   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:06.584017   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:06.588672   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:37:06.588681   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:37:06.600737   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:37:06.600751   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:37:06.616447   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:37:06.616456   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:37:06.628511   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:37:06.628521   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:37:09.144771   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:14.147586   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:14.147793   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:14.165349   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:37:14.165433   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:14.179008   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:37:14.179086   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:14.192593   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:37:14.192666   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:14.207403   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:37:14.207465   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:14.217980   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:37:14.218046   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:14.228745   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:37:14.228807   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:14.238378   16583 logs.go:276] 0 containers: []
	W0610 04:37:14.238389   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:14.238443   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:14.248505   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:37:14.248534   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:37:14.248540   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:37:14.267025   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:37:14.267037   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:37:14.291323   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:37:14.291336   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:37:14.302388   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:14.302402   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:14.326301   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:37:14.326309   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:37:14.338322   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:37:14.338335   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:37:14.350865   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:37:14.350879   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:37:14.378890   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:14.378903   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:14.413205   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:37:14.413216   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:37:14.427958   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:37:14.427970   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:37:14.439520   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:14.439531   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:14.476057   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:14.476066   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:14.480625   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:37:14.480634   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:37:14.495256   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:37:14.495266   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:37:14.515193   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:37:14.515204   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:17.030034   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:22.031556   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:22.031748   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:22.053749   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:37:22.053866   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:22.068563   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:37:22.068637   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:22.081783   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:37:22.081854   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:22.097403   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:37:22.097495   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:22.107594   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:37:22.107653   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:22.118009   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:37:22.118078   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:22.131999   16583 logs.go:276] 0 containers: []
	W0610 04:37:22.132010   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:22.132064   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:22.142534   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:37:22.142555   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:37:22.142560   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:37:22.162249   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:37:22.162262   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:37:22.177456   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:22.177469   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:22.202627   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:37:22.202638   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:37:22.217624   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:37:22.217636   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:37:22.229446   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:37:22.229457   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:37:22.241848   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:22.241861   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:22.277524   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:37:22.277537   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:37:22.296592   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:37:22.296608   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:37:22.308893   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:37:22.308905   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:37:22.323501   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:22.323514   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:22.362570   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:37:22.362584   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:37:22.374065   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:37:22.374081   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:37:22.391768   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:37:22.391782   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:22.403664   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:22.403675   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:24.910411   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:29.912721   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:29.912931   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 04:37:29.934256   16583 logs.go:276] 1 containers: [b892941f9a3e]
	I0610 04:37:29.934339   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 04:37:29.952416   16583 logs.go:276] 1 containers: [8dc476d5278c]
	I0610 04:37:29.952486   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 04:37:29.964147   16583 logs.go:276] 4 containers: [360ed73bf6d9 1ca91723df81 491b628d6903 3d224da1c15f]
	I0610 04:37:29.964218   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 04:37:29.977295   16583 logs.go:276] 1 containers: [180035e13cd8]
	I0610 04:37:29.977357   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 04:37:29.987689   16583 logs.go:276] 1 containers: [92a4d756c9e4]
	I0610 04:37:29.987753   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 04:37:30.001729   16583 logs.go:276] 1 containers: [c2686720da2b]
	I0610 04:37:30.001791   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 04:37:30.012481   16583 logs.go:276] 0 containers: []
	W0610 04:37:30.012493   16583 logs.go:278] No container was found matching "kindnet"
	I0610 04:37:30.012550   16583 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 04:37:30.028519   16583 logs.go:276] 1 containers: [0318e814f6cb]
	I0610 04:37:30.028538   16583 logs.go:123] Gathering logs for Docker ...
	I0610 04:37:30.028543   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 04:37:30.052127   16583 logs.go:123] Gathering logs for dmesg ...
	I0610 04:37:30.052136   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 04:37:30.056514   16583 logs.go:123] Gathering logs for kube-proxy [92a4d756c9e4] ...
	I0610 04:37:30.056520   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92a4d756c9e4"
	I0610 04:37:30.068381   16583 logs.go:123] Gathering logs for storage-provisioner [0318e814f6cb] ...
	I0610 04:37:30.068392   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0318e814f6cb"
	I0610 04:37:30.080418   16583 logs.go:123] Gathering logs for kube-apiserver [b892941f9a3e] ...
	I0610 04:37:30.080429   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b892941f9a3e"
	I0610 04:37:30.094719   16583 logs.go:123] Gathering logs for coredns [3d224da1c15f] ...
	I0610 04:37:30.094730   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d224da1c15f"
	I0610 04:37:30.106076   16583 logs.go:123] Gathering logs for kube-scheduler [180035e13cd8] ...
	I0610 04:37:30.106087   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 180035e13cd8"
	I0610 04:37:30.121791   16583 logs.go:123] Gathering logs for container status ...
	I0610 04:37:30.121801   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 04:37:30.133523   16583 logs.go:123] Gathering logs for kubelet ...
	I0610 04:37:30.133532   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 04:37:30.171534   16583 logs.go:123] Gathering logs for etcd [8dc476d5278c] ...
	I0610 04:37:30.171542   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc476d5278c"
	I0610 04:37:30.185409   16583 logs.go:123] Gathering logs for coredns [1ca91723df81] ...
	I0610 04:37:30.185419   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ca91723df81"
	I0610 04:37:30.201339   16583 logs.go:123] Gathering logs for coredns [491b628d6903] ...
	I0610 04:37:30.201353   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 491b628d6903"
	I0610 04:37:30.213363   16583 logs.go:123] Gathering logs for kube-controller-manager [c2686720da2b] ...
	I0610 04:37:30.213372   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2686720da2b"
	I0610 04:37:30.230439   16583 logs.go:123] Gathering logs for describe nodes ...
	I0610 04:37:30.230450   16583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 04:37:30.264980   16583 logs.go:123] Gathering logs for coredns [360ed73bf6d9] ...
	I0610 04:37:30.264991   16583 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360ed73bf6d9"
	I0610 04:37:32.779627   16583 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 04:37:37.781983   16583 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 04:37:37.787701   16583 out.go:177] 
	W0610 04:37:37.791637   16583 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0610 04:37:37.791655   16583 out.go:239] * 
	* 
	W0610 04:37:37.792517   16583 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:37:37.803552   16583 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-227000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (593.84s)
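
Triage note: the upgraded VM booted, but the final wait loop polled https://10.0.2.15:8443/healthz for 6m0s and the apiserver never reported healthy. The same probe can be replayed by hand to tell a hung apiserver from a merely slow one. A minimal sketch, assuming the guest from this run were still up (profile name and address are taken from the log above):

	# replay the healthz probe the harness timed out on
	out/minikube-darwin-arm64 -p stopped-upgrade-227000 ssh -- \
	  curl -sk https://10.0.2.15:8443/healthz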

TestPause/serial/Start (9.81s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-029000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-029000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.764437458s)

-- stdout --
	* [pause-029000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-029000" primary control-plane node in "pause-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-029000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-029000 -n pause-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-029000 -n pause-029000: exit status 7 (46.547375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.81s)
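
Triage note: every qemu2 start in this run dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so QEMU is never launched. That implicates the host daemon, not the individual tests. A minimal host-side check, assuming socket_vmnet was installed via Homebrew (the service name below is that assumption, not something the log confirms):

	ls -l /var/run/socket_vmnet              # the unix socket should exist
	pgrep -fl socket_vmnet                   # the daemon should be running
	sudo brew services restart socket_vmnet  # restart, if Homebrew-managed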

TestNoKubernetes/serial/StartWithK8s (9.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-448000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-448000 --driver=qemu2 : exit status 80 (9.786004958s)

-- stdout --
	* [NoKubernetes-448000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-448000" primary control-plane node in "NoKubernetes-448000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-448000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-448000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-448000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-448000 -n NoKubernetes-448000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-448000 -n NoKubernetes-448000: exit status 7 (51.060416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-448000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.84s)

TestNoKubernetes/serial/StartWithStopK8s (7.68s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-448000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-448000 --no-kubernetes --driver=qemu2 : exit status 80 (7.629326167s)

-- stdout --
	* [NoKubernetes-448000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-448000
	* Restarting existing qemu2 VM for "NoKubernetes-448000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-448000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-448000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-448000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-448000 -n NoKubernetes-448000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-448000 -n NoKubernetes-448000: exit status 7 (50.769ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-448000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.68s)
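
Triage note: unlike StartWithK8s above, this subtest finds the leftover NoKubernetes-448000 profile and takes the "Restarting existing qemu2 VM" path (hence "driver start:" rather than "creating host:" in the error), but it fails on the same socket_vmnet connect. Once the daemon is healthy, the cleanup the log itself suggests clears the stale profile:

	out/minikube-darwin-arm64 delete -p NoKubernetes-448000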

TestNoKubernetes/serial/Start (7.65s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-448000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-448000 --no-kubernetes --driver=qemu2 : exit status 80 (7.605133791s)

-- stdout --
	* [NoKubernetes-448000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-448000
	* Restarting existing qemu2 VM for "NoKubernetes-448000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-448000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-448000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-448000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-448000 -n NoKubernetes-448000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-448000 -n NoKubernetes-448000: exit status 7 (46.636416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-448000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.65s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.46s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19052
- KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2178177956/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.46s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.68s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.68s)
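
Triage note: both TestHyperkitDriverSkipUpgrade subtests exit with status 56 (DRV_UNSUPPORTED_OS), which is the expected answer on this agent: hyperkit is an Intel-only hypervisor and the host is darwin/arm64. These two failures read as a missing arm64 skip in the test rather than a regression. The host architecture can be confirmed directly:

	uname -sm   # prints "Darwin arm64" here; hyperkit requires x86_64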

TestNoKubernetes/serial/StartNoArgs (5.5s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-448000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-448000 --driver=qemu2 : exit status 80 (5.4373445s)

-- stdout --
	* [NoKubernetes-448000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-448000
	* Restarting existing qemu2 VM for "NoKubernetes-448000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-448000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-448000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-448000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-448000 -n NoKubernetes-448000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-448000 -n NoKubernetes-448000: exit status 7 (65.613167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-448000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.50s)

TestNetworkPlugins/group/auto/Start (9.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.822079208s)

-- stdout --
	* [auto-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-463000" primary control-plane node in "auto-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:39:29.811143   17138 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:39:29.811287   17138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:39:29.811290   17138 out.go:304] Setting ErrFile to fd 2...
	I0610 04:39:29.811292   17138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:39:29.811421   17138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:39:29.812482   17138 out.go:298] Setting JSON to false
	I0610 04:39:29.828599   17138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9540,"bootTime":1718010029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:39:29.828667   17138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:39:29.835007   17138 out.go:177] * [auto-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:39:29.845007   17138 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:39:29.845037   17138 notify.go:220] Checking for updates...
	I0610 04:39:29.851858   17138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:39:29.854937   17138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:39:29.858006   17138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:39:29.860888   17138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:39:29.863904   17138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:39:29.867349   17138 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:39:29.867416   17138 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:39:29.867469   17138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:39:29.871930   17138 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:39:29.878984   17138 start.go:297] selected driver: qemu2
	I0610 04:39:29.878990   17138 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:39:29.878995   17138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:39:29.881164   17138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:39:29.884913   17138 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:39:29.888045   17138 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:39:29.888080   17138 cni.go:84] Creating CNI manager for ""
	I0610 04:39:29.888088   17138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:39:29.888092   17138 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:39:29.888126   17138 start.go:340] cluster config:
	{Name:auto-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:39:29.892854   17138 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:39:29.899931   17138 out.go:177] * Starting "auto-463000" primary control-plane node in "auto-463000" cluster
	I0610 04:39:29.902889   17138 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:39:29.902906   17138 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:39:29.902916   17138 cache.go:56] Caching tarball of preloaded images
	I0610 04:39:29.903002   17138 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:39:29.903008   17138 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:39:29.903069   17138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/auto-463000/config.json ...
	I0610 04:39:29.903081   17138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/auto-463000/config.json: {Name:mk7d2887e0a5d56fc614ad9b8761dcb06b75e7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:39:29.903339   17138 start.go:360] acquireMachinesLock for auto-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:39:29.903376   17138 start.go:364] duration metric: took 30.459µs to acquireMachinesLock for "auto-463000"
	I0610 04:39:29.903389   17138 start.go:93] Provisioning new machine with config: &{Name:auto-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:39:29.903428   17138 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:39:29.911848   17138 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:39:29.930505   17138 start.go:159] libmachine.API.Create for "auto-463000" (driver="qemu2")
	I0610 04:39:29.930536   17138 client.go:168] LocalClient.Create starting
	I0610 04:39:29.930608   17138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:39:29.930638   17138 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:29.930649   17138 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:29.930690   17138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:39:29.930713   17138 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:29.930722   17138 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:29.931085   17138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:39:30.076376   17138 main.go:141] libmachine: Creating SSH key...
	I0610 04:39:30.117774   17138 main.go:141] libmachine: Creating Disk image...
	I0610 04:39:30.117780   17138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:39:30.117950   17138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2
	I0610 04:39:30.130698   17138 main.go:141] libmachine: STDOUT: 
	I0610 04:39:30.130719   17138 main.go:141] libmachine: STDERR: 
	I0610 04:39:30.130791   17138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2 +20000M
	I0610 04:39:30.141588   17138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:39:30.141609   17138 main.go:141] libmachine: STDERR: 
	I0610 04:39:30.141622   17138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2
	I0610 04:39:30.141631   17138 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:39:30.141669   17138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:47:53:98:33:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2
	I0610 04:39:30.143361   17138 main.go:141] libmachine: STDOUT: 
	I0610 04:39:30.143375   17138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:39:30.143395   17138 client.go:171] duration metric: took 212.85ms to LocalClient.Create
	I0610 04:39:32.145612   17138 start.go:128] duration metric: took 2.242142209s to createHost
	I0610 04:39:32.145668   17138 start.go:83] releasing machines lock for "auto-463000", held for 2.242266334s
	W0610 04:39:32.145779   17138 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:32.162046   17138 out.go:177] * Deleting "auto-463000" in qemu2 ...
	W0610 04:39:32.190301   17138 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:32.190323   17138 start.go:728] Will try again in 5 seconds ...
	I0610 04:39:37.192516   17138 start.go:360] acquireMachinesLock for auto-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:39:37.192945   17138 start.go:364] duration metric: took 357.25µs to acquireMachinesLock for "auto-463000"
	I0610 04:39:37.193068   17138 start.go:93] Provisioning new machine with config: &{Name:auto-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:39:37.193333   17138 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:39:37.199038   17138 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:39:37.247012   17138 start.go:159] libmachine.API.Create for "auto-463000" (driver="qemu2")
	I0610 04:39:37.247069   17138 client.go:168] LocalClient.Create starting
	I0610 04:39:37.247174   17138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:39:37.247248   17138 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:37.247265   17138 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:37.247326   17138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:39:37.247369   17138 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:37.247382   17138 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:37.247960   17138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:39:37.404142   17138 main.go:141] libmachine: Creating SSH key...
	I0610 04:39:37.533839   17138 main.go:141] libmachine: Creating Disk image...
	I0610 04:39:37.533844   17138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:39:37.534027   17138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2
	I0610 04:39:37.546596   17138 main.go:141] libmachine: STDOUT: 
	I0610 04:39:37.546615   17138 main.go:141] libmachine: STDERR: 
	I0610 04:39:37.546664   17138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2 +20000M
	I0610 04:39:37.557486   17138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:39:37.557501   17138 main.go:141] libmachine: STDERR: 
	I0610 04:39:37.557514   17138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2
	I0610 04:39:37.557519   17138 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:39:37.557554   17138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:6e:47:99:7d:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/auto-463000/disk.qcow2
	I0610 04:39:37.559323   17138 main.go:141] libmachine: STDOUT: 
	I0610 04:39:37.559337   17138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:39:37.559349   17138 client.go:171] duration metric: took 312.272916ms to LocalClient.Create
	I0610 04:39:39.561534   17138 start.go:128] duration metric: took 2.36812125s to createHost
	I0610 04:39:39.561588   17138 start.go:83] releasing machines lock for "auto-463000", held for 2.368602292s
	W0610 04:39:39.562015   17138 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:39.571505   17138 out.go:177] 
	W0610 04:39:39.578639   17138 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:39:39.578711   17138 out.go:239] * 
	* 
	W0610 04:39:39.581105   17138 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:39:39.590570   17138 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.83s)
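
Triage note: the verbose log above shows why both attempts fail in about two seconds each: libmachine builds the disk with qemu-img, then launches QEMU through socket_vmnet_client, which must connect to /var/run/socket_vmnet before it hands the connection to qemu-system-aarch64 as fd 3. The shape of the failing invocation, trimmed from the log (paths and flags are the log's own):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 3072 -smp 2 -device virtio-net-pci,netdev=net0 \
	  -netdev socket,id=net0,fd=3 ...
	# "Connection refused" happens in the wrapper, so the VM never starts
	# and no guest logs exist for post-mortem.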

TestNetworkPlugins/group/kindnet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.83027625s)

-- stdout --
	* [kindnet-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-463000" primary control-plane node in "kindnet-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:39:41.823667   17250 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:39:41.823796   17250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:39:41.823799   17250 out.go:304] Setting ErrFile to fd 2...
	I0610 04:39:41.823805   17250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:39:41.823944   17250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:39:41.824986   17250 out.go:298] Setting JSON to false
	I0610 04:39:41.841081   17250 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9552,"bootTime":1718010029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:39:41.841147   17250 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:39:41.847194   17250 out.go:177] * [kindnet-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:39:41.853094   17250 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:39:41.856101   17250 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:39:41.853158   17250 notify.go:220] Checking for updates...
	I0610 04:39:41.861984   17250 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:39:41.865061   17250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:39:41.868063   17250 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:39:41.871086   17250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:39:41.874387   17250 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:39:41.874471   17250 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:39:41.874518   17250 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:39:41.879065   17250 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:39:41.886066   17250 start.go:297] selected driver: qemu2
	I0610 04:39:41.886072   17250 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:39:41.886081   17250 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:39:41.888157   17250 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:39:41.891028   17250 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:39:41.894095   17250 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:39:41.894115   17250 cni.go:84] Creating CNI manager for "kindnet"
	I0610 04:39:41.894121   17250 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 04:39:41.894161   17250 start.go:340] cluster config:
	{Name:kindnet-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:39:41.898708   17250 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:39:41.905983   17250 out.go:177] * Starting "kindnet-463000" primary control-plane node in "kindnet-463000" cluster
	I0610 04:39:41.910072   17250 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:39:41.910091   17250 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:39:41.910104   17250 cache.go:56] Caching tarball of preloaded images
	I0610 04:39:41.910176   17250 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:39:41.910185   17250 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:39:41.910253   17250 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/kindnet-463000/config.json ...
	I0610 04:39:41.910265   17250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/kindnet-463000/config.json: {Name:mk80af2c2ba27c73a5c827b63ca47da02004bf23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:39:41.910505   17250 start.go:360] acquireMachinesLock for kindnet-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:39:41.910542   17250 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "kindnet-463000"
	I0610 04:39:41.910554   17250 start.go:93] Provisioning new machine with config: &{Name:kindnet-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:39:41.910586   17250 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:39:41.914120   17250 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:39:41.932174   17250 start.go:159] libmachine.API.Create for "kindnet-463000" (driver="qemu2")
	I0610 04:39:41.932204   17250 client.go:168] LocalClient.Create starting
	I0610 04:39:41.932260   17250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:39:41.932292   17250 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:41.932302   17250 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:41.932347   17250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:39:41.932370   17250 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:41.932381   17250 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:41.932765   17250 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:39:42.078535   17250 main.go:141] libmachine: Creating SSH key...
	I0610 04:39:42.162929   17250 main.go:141] libmachine: Creating Disk image...
	I0610 04:39:42.162934   17250 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:39:42.163140   17250 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2
	I0610 04:39:42.175840   17250 main.go:141] libmachine: STDOUT: 
	I0610 04:39:42.175862   17250 main.go:141] libmachine: STDERR: 
	I0610 04:39:42.175931   17250 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2 +20000M
	I0610 04:39:42.186957   17250 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:39:42.186982   17250 main.go:141] libmachine: STDERR: 
	I0610 04:39:42.186992   17250 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2
	I0610 04:39:42.186998   17250 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:39:42.187025   17250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:4f:eb:ac:c8:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2
	I0610 04:39:42.188696   17250 main.go:141] libmachine: STDOUT: 
	I0610 04:39:42.188720   17250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:39:42.188740   17250 client.go:171] duration metric: took 256.529166ms to LocalClient.Create
	I0610 04:39:44.190932   17250 start.go:128] duration metric: took 2.280312167s to createHost
	I0610 04:39:44.191037   17250 start.go:83] releasing machines lock for "kindnet-463000", held for 2.280419209s
	W0610 04:39:44.191092   17250 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:44.206924   17250 out.go:177] * Deleting "kindnet-463000" in qemu2 ...
	W0610 04:39:44.235859   17250 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:44.235886   17250 start.go:728] Will try again in 5 seconds ...
	I0610 04:39:49.238124   17250 start.go:360] acquireMachinesLock for kindnet-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:39:49.238788   17250 start.go:364] duration metric: took 486.291µs to acquireMachinesLock for "kindnet-463000"
	I0610 04:39:49.239404   17250 start.go:93] Provisioning new machine with config: &{Name:kindnet-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:39:49.239679   17250 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:39:49.257377   17250 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:39:49.309185   17250 start.go:159] libmachine.API.Create for "kindnet-463000" (driver="qemu2")
	I0610 04:39:49.309254   17250 client.go:168] LocalClient.Create starting
	I0610 04:39:49.309355   17250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:39:49.309427   17250 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:49.309449   17250 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:49.309509   17250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:39:49.309553   17250 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:49.309571   17250 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:49.310090   17250 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:39:49.469938   17250 main.go:141] libmachine: Creating SSH key...
	I0610 04:39:49.555029   17250 main.go:141] libmachine: Creating Disk image...
	I0610 04:39:49.555039   17250 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:39:49.555222   17250 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2
	I0610 04:39:49.567813   17250 main.go:141] libmachine: STDOUT: 
	I0610 04:39:49.567837   17250 main.go:141] libmachine: STDERR: 
	I0610 04:39:49.567912   17250 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2 +20000M
	I0610 04:39:49.578877   17250 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:39:49.578899   17250 main.go:141] libmachine: STDERR: 
	I0610 04:39:49.578915   17250 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2
	I0610 04:39:49.578920   17250 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:39:49.578964   17250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d2:7a:80:e5:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kindnet-463000/disk.qcow2
	I0610 04:39:49.580636   17250 main.go:141] libmachine: STDOUT: 
	I0610 04:39:49.580650   17250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:39:49.580669   17250 client.go:171] duration metric: took 271.4065ms to LocalClient.Create
	I0610 04:39:51.582842   17250 start.go:128] duration metric: took 2.343124s to createHost
	I0610 04:39:51.582885   17250 start.go:83] releasing machines lock for "kindnet-463000", held for 2.344025458s
	W0610 04:39:51.583194   17250 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:51.595332   17250 out.go:177] 
	W0610 04:39:51.599039   17250 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:39:51.599078   17250 out.go:239] * 
	* 
	W0610 04:39:51.600387   17250 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:39:51.612967   17250 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
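
Note: every start in this group dies at the same step: socket_vmnet_client cannot reach the daemon's control socket, so QEMU never receives a vmnet file descriptor and minikube aborts with GUEST_PROVISION (exit status 80). A minimal health check on the build agent might look like the sketch below; the direct daemon invocation and the gateway address follow the socket_vmnet README defaults and are assumptions about this host, not values recorded in the log.

	# Is the control socket present, and is a daemon holding it open?
	ls -l /var/run/socket_vmnet

	# If not, relaunch the daemon in the background (assumed default paths/gateway).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &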

TestNetworkPlugins/group/flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.886831875s)

-- stdout --
	* [flannel-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-463000" primary control-plane node in "flannel-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:39:53.955264   17367 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:39:53.955389   17367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:39:53.955392   17367 out.go:304] Setting ErrFile to fd 2...
	I0610 04:39:53.955395   17367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:39:53.955539   17367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:39:53.956574   17367 out.go:298] Setting JSON to false
	I0610 04:39:53.972903   17367 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9564,"bootTime":1718010029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:39:53.972970   17367 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:39:53.979461   17367 out.go:177] * [flannel-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:39:53.986274   17367 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:39:53.987942   17367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:39:53.986347   17367 notify.go:220] Checking for updates...
	I0610 04:39:53.994220   17367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:39:53.997312   17367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:39:54.000268   17367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:39:54.003226   17367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:39:54.006647   17367 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:39:54.006723   17367 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:39:54.006773   17367 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:39:54.011207   17367 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:39:54.018278   17367 start.go:297] selected driver: qemu2
	I0610 04:39:54.018284   17367 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:39:54.018290   17367 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:39:54.020445   17367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:39:54.023256   17367 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:39:54.026365   17367 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:39:54.026393   17367 cni.go:84] Creating CNI manager for "flannel"
	I0610 04:39:54.026397   17367 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0610 04:39:54.026435   17367 start.go:340] cluster config:
	{Name:flannel-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:39:54.030802   17367 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:39:54.036223   17367 out.go:177] * Starting "flannel-463000" primary control-plane node in "flannel-463000" cluster
	I0610 04:39:54.040231   17367 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:39:54.040244   17367 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:39:54.040251   17367 cache.go:56] Caching tarball of preloaded images
	I0610 04:39:54.040316   17367 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:39:54.040322   17367 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:39:54.040401   17367 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/flannel-463000/config.json ...
	I0610 04:39:54.040413   17367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/flannel-463000/config.json: {Name:mkf64e43c297ddc003dd872ea46501d040109080 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:39:54.040636   17367 start.go:360] acquireMachinesLock for flannel-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:39:54.040672   17367 start.go:364] duration metric: took 29.459µs to acquireMachinesLock for "flannel-463000"
	I0610 04:39:54.040683   17367 start.go:93] Provisioning new machine with config: &{Name:flannel-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:39:54.040711   17367 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:39:54.048270   17367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:39:54.066166   17367 start.go:159] libmachine.API.Create for "flannel-463000" (driver="qemu2")
	I0610 04:39:54.066190   17367 client.go:168] LocalClient.Create starting
	I0610 04:39:54.066253   17367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:39:54.066289   17367 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:54.066302   17367 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:54.066348   17367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:39:54.066372   17367 main.go:141] libmachine: Decoding PEM data...
	I0610 04:39:54.066380   17367 main.go:141] libmachine: Parsing certificate...
	I0610 04:39:54.066822   17367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:39:54.213246   17367 main.go:141] libmachine: Creating SSH key...
	I0610 04:39:54.348534   17367 main.go:141] libmachine: Creating Disk image...
	I0610 04:39:54.348543   17367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:39:54.348804   17367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2
	I0610 04:39:54.364111   17367 main.go:141] libmachine: STDOUT: 
	I0610 04:39:54.364134   17367 main.go:141] libmachine: STDERR: 
	I0610 04:39:54.364188   17367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2 +20000M
	I0610 04:39:54.375313   17367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:39:54.375332   17367 main.go:141] libmachine: STDERR: 
	I0610 04:39:54.375344   17367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2
	I0610 04:39:54.375349   17367 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:39:54.375385   17367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:d1:63:cc:c4:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2
	I0610 04:39:54.377104   17367 main.go:141] libmachine: STDOUT: 
	I0610 04:39:54.377131   17367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:39:54.377152   17367 client.go:171] duration metric: took 310.953375ms to LocalClient.Create
	I0610 04:39:56.379347   17367 start.go:128] duration metric: took 2.338603208s to createHost
	I0610 04:39:56.379398   17367 start.go:83] releasing machines lock for "flannel-463000", held for 2.338699125s
	W0610 04:39:56.379461   17367 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:56.393773   17367 out.go:177] * Deleting "flannel-463000" in qemu2 ...
	W0610 04:39:56.422966   17367 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:39:56.422997   17367 start.go:728] Will try again in 5 seconds ...
	I0610 04:40:01.425184   17367 start.go:360] acquireMachinesLock for flannel-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:01.425713   17367 start.go:364] duration metric: took 446.625µs to acquireMachinesLock for "flannel-463000"
	I0610 04:40:01.425866   17367 start.go:93] Provisioning new machine with config: &{Name:flannel-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:01.426194   17367 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:01.441776   17367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:01.491727   17367 start.go:159] libmachine.API.Create for "flannel-463000" (driver="qemu2")
	I0610 04:40:01.491771   17367 client.go:168] LocalClient.Create starting
	I0610 04:40:01.491869   17367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:01.491926   17367 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:01.491943   17367 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:01.492011   17367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:01.492055   17367 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:01.492073   17367 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:01.492589   17367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:01.649087   17367 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:01.742573   17367 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:01.742579   17367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:01.742764   17367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2
	I0610 04:40:01.755251   17367 main.go:141] libmachine: STDOUT: 
	I0610 04:40:01.755271   17367 main.go:141] libmachine: STDERR: 
	I0610 04:40:01.755324   17367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2 +20000M
	I0610 04:40:01.766076   17367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:01.766094   17367 main.go:141] libmachine: STDERR: 
	I0610 04:40:01.766108   17367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2
	I0610 04:40:01.766112   17367 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:01.766150   17367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:14:f4:af:14:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/flannel-463000/disk.qcow2
	I0610 04:40:01.767835   17367 main.go:141] libmachine: STDOUT: 
	I0610 04:40:01.767855   17367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:01.767869   17367 client.go:171] duration metric: took 276.089791ms to LocalClient.Create
	I0610 04:40:03.770060   17367 start.go:128] duration metric: took 2.34382s to createHost
	I0610 04:40:03.770111   17367 start.go:83] releasing machines lock for "flannel-463000", held for 2.344357s
	W0610 04:40:03.770545   17367 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:03.782945   17367 out.go:177] 
	W0610 04:40:03.788058   17367 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:40:03.788084   17367 out.go:239] * 
	* 
	W0610 04:40:03.790711   17367 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:40:03.799976   17367 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.89s)
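
Note: the flannel run fails at the identical point, which points at the shared socket_vmnet daemon rather than anything CNI-specific. The refused connect can be reproduced in isolation, without booting a VM, by handing the client a trivial child command (client and socket paths are the ones logged above; "true" is just a placeholder child process):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# unhealthy: Failed to connect to "/var/run/socket_vmnet": Connection refused
	# healthy:   "true" is exec'd with the vmnet socket on fd 3 and exits 0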

TestNetworkPlugins/group/enable-default-cni/Start (9.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.948047875s)

-- stdout --
	* [enable-default-cni-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-463000" primary control-plane node in "enable-default-cni-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:40:06.194054   17486 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:40:06.194233   17486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:06.194238   17486 out.go:304] Setting ErrFile to fd 2...
	I0610 04:40:06.194241   17486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:06.194395   17486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:40:06.195629   17486 out.go:298] Setting JSON to false
	I0610 04:40:06.211871   17486 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9577,"bootTime":1718010029,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:40:06.211931   17486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:40:06.218948   17486 out.go:177] * [enable-default-cni-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:40:06.225943   17486 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:40:06.230888   17486 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:40:06.225995   17486 notify.go:220] Checking for updates...
	I0610 04:40:06.233877   17486 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:40:06.236925   17486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:40:06.240902   17486 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:40:06.243932   17486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:40:06.247303   17486 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:06.247375   17486 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:06.247423   17486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:40:06.251833   17486 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:40:06.258884   17486 start.go:297] selected driver: qemu2
	I0610 04:40:06.258889   17486 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:40:06.258894   17486 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:40:06.261038   17486 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:40:06.263885   17486 out.go:177] * Automatically selected the socket_vmnet network
	E0610 04:40:06.266965   17486 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0610 04:40:06.266979   17486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:40:06.267009   17486 cni.go:84] Creating CNI manager for "bridge"
	I0610 04:40:06.267013   17486 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:40:06.267044   17486 start.go:340] cluster config:
	{Name:enable-default-cni-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:40:06.271432   17486 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:40:06.278902   17486 out.go:177] * Starting "enable-default-cni-463000" primary control-plane node in "enable-default-cni-463000" cluster
	I0610 04:40:06.282898   17486 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:40:06.282914   17486 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:40:06.282930   17486 cache.go:56] Caching tarball of preloaded images
	I0610 04:40:06.283000   17486 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:40:06.283006   17486 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:40:06.283084   17486 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/enable-default-cni-463000/config.json ...
	I0610 04:40:06.283095   17486 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/enable-default-cni-463000/config.json: {Name:mkb2bf245f0763e730ee8f5a7c3fa5c3ce7f26e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:40:06.283307   17486 start.go:360] acquireMachinesLock for enable-default-cni-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:06.283341   17486 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "enable-default-cni-463000"
	I0610 04:40:06.283352   17486 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:06.283377   17486 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:06.290910   17486 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:06.308558   17486 start.go:159] libmachine.API.Create for "enable-default-cni-463000" (driver="qemu2")
	I0610 04:40:06.308592   17486 client.go:168] LocalClient.Create starting
	I0610 04:40:06.308662   17486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:06.308699   17486 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:06.308711   17486 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:06.308759   17486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:06.308782   17486 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:06.308790   17486 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:06.309204   17486 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:06.452609   17486 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:06.644388   17486 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:06.644395   17486 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:06.644592   17486 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2
	I0610 04:40:06.657601   17486 main.go:141] libmachine: STDOUT: 
	I0610 04:40:06.657626   17486 main.go:141] libmachine: STDERR: 
	I0610 04:40:06.657697   17486 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2 +20000M
	I0610 04:40:06.668685   17486 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:06.668700   17486 main.go:141] libmachine: STDERR: 
	I0610 04:40:06.668724   17486 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2
	I0610 04:40:06.668729   17486 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:06.668764   17486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:da:26:fe:99:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2
	I0610 04:40:06.670395   17486 main.go:141] libmachine: STDOUT: 
	I0610 04:40:06.670413   17486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:06.670431   17486 client.go:171] duration metric: took 361.83075ms to LocalClient.Create
	I0610 04:40:08.672641   17486 start.go:128] duration metric: took 2.389229875s to createHost
	I0610 04:40:08.672692   17486 start.go:83] releasing machines lock for "enable-default-cni-463000", held for 2.389324792s
	W0610 04:40:08.672757   17486 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:08.690069   17486 out.go:177] * Deleting "enable-default-cni-463000" in qemu2 ...
	W0610 04:40:08.718085   17486 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:08.718104   17486 start.go:728] Will try again in 5 seconds ...
	I0610 04:40:13.720567   17486 start.go:360] acquireMachinesLock for enable-default-cni-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:13.721045   17486 start.go:364] duration metric: took 363.042µs to acquireMachinesLock for "enable-default-cni-463000"
	I0610 04:40:13.721195   17486 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:13.721477   17486 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:13.732987   17486 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:13.782987   17486 start.go:159] libmachine.API.Create for "enable-default-cni-463000" (driver="qemu2")
	I0610 04:40:13.783036   17486 client.go:168] LocalClient.Create starting
	I0610 04:40:13.783151   17486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:13.783215   17486 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:13.783230   17486 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:13.783292   17486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:13.783336   17486 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:13.783353   17486 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:13.783872   17486 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:13.939795   17486 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:14.041185   17486 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:14.041190   17486 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:14.041354   17486 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2
	I0610 04:40:14.053756   17486 main.go:141] libmachine: STDOUT: 
	I0610 04:40:14.053785   17486 main.go:141] libmachine: STDERR: 
	I0610 04:40:14.053847   17486 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2 +20000M
	I0610 04:40:14.064651   17486 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:14.064681   17486 main.go:141] libmachine: STDERR: 
	I0610 04:40:14.064694   17486 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2
	I0610 04:40:14.064698   17486 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:14.064741   17486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:cc:27:30:52:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/enable-default-cni-463000/disk.qcow2
	I0610 04:40:14.066497   17486 main.go:141] libmachine: STDOUT: 
	I0610 04:40:14.066515   17486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:14.066528   17486 client.go:171] duration metric: took 283.485542ms to LocalClient.Create
	I0610 04:40:16.068760   17486 start.go:128] duration metric: took 2.3472095s to createHost
	I0610 04:40:16.068890   17486 start.go:83] releasing machines lock for "enable-default-cni-463000", held for 2.347802s
	W0610 04:40:16.069314   17486 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:16.078961   17486 out.go:177] 
	W0610 04:40:16.086104   17486 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:40:16.086132   17486 out.go:239] * 
	* 
	W0610 04:40:16.088899   17486 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:40:16.098946   17486 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.95s)
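
Every start in this TestNetworkPlugins group dies at the same step: libmachine launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to the /var/run/socket_vmnet unix socket is refused. A refused connect on a unix socket means the socket file is present but nothing is accepting on it, so the socket_vmnet daemon on the build agent is down or left a stale socket behind. A minimal host-side triage sketch, assuming socket_vmnet was installed through Homebrew as in the minikube qemu2 driver docs (the Homebrew service name is an assumption here):

	# Probe the unix socket directly; "Connection refused" reproduces the test failure.
	nc -U /var/run/socket_vmnet < /dev/null
	# Restart the daemon; root is required because vmnet.framework needs elevated privileges.
	sudo brew services restart socket_vmnet
	# Confirm the socket is back before re-running the suite.
	ls -l /var/run/socket_vmnet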

TestNetworkPlugins/group/bridge/Start (9.91s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.90608025s)

-- stdout --
	* [bridge-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-463000" primary control-plane node in "bridge-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:40:18.337173   17602 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:40:18.337299   17602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:18.337302   17602 out.go:304] Setting ErrFile to fd 2...
	I0610 04:40:18.337304   17602 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:18.337431   17602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:40:18.338530   17602 out.go:298] Setting JSON to false
	I0610 04:40:18.354918   17602 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9589,"bootTime":1718010029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:40:18.354997   17602 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:40:18.360142   17602 out.go:177] * [bridge-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:40:18.367037   17602 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:40:18.367070   17602 notify.go:220] Checking for updates...
	I0610 04:40:18.371052   17602 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:40:18.375116   17602 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:40:18.378000   17602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:40:18.381000   17602 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:40:18.384064   17602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:40:18.387374   17602 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:18.387446   17602 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:18.387487   17602 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:40:18.392015   17602 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:40:18.399011   17602 start.go:297] selected driver: qemu2
	I0610 04:40:18.399019   17602 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:40:18.399025   17602 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:40:18.401314   17602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:40:18.405020   17602 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:40:18.408118   17602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:40:18.408153   17602 cni.go:84] Creating CNI manager for "bridge"
	I0610 04:40:18.408157   17602 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:40:18.408191   17602 start.go:340] cluster config:
	{Name:bridge-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:40:18.412691   17602 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:40:18.421051   17602 out.go:177] * Starting "bridge-463000" primary control-plane node in "bridge-463000" cluster
	I0610 04:40:18.424879   17602 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:40:18.424902   17602 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:40:18.424910   17602 cache.go:56] Caching tarball of preloaded images
	I0610 04:40:18.424976   17602 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:40:18.424982   17602 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:40:18.425053   17602 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/bridge-463000/config.json ...
	I0610 04:40:18.425069   17602 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/bridge-463000/config.json: {Name:mk51c09fe5fd2fa733d1b72f9b7acdf12227c11c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:40:18.425449   17602 start.go:360] acquireMachinesLock for bridge-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:18.425482   17602 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "bridge-463000"
	I0610 04:40:18.425491   17602 start.go:93] Provisioning new machine with config: &{Name:bridge-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:18.425525   17602 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:18.433009   17602 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:18.450619   17602 start.go:159] libmachine.API.Create for "bridge-463000" (driver="qemu2")
	I0610 04:40:18.450646   17602 client.go:168] LocalClient.Create starting
	I0610 04:40:18.450701   17602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:18.450736   17602 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:18.450751   17602 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:18.450794   17602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:18.450817   17602 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:18.450824   17602 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:18.451282   17602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:18.596760   17602 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:18.664189   17602 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:18.664195   17602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:18.664384   17602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2
	I0610 04:40:18.676938   17602 main.go:141] libmachine: STDOUT: 
	I0610 04:40:18.676959   17602 main.go:141] libmachine: STDERR: 
	I0610 04:40:18.677010   17602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2 +20000M
	I0610 04:40:18.688124   17602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:18.688140   17602 main.go:141] libmachine: STDERR: 
	I0610 04:40:18.688155   17602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2
	I0610 04:40:18.688158   17602 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:18.688191   17602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a7:9c:f5:d8:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2
	I0610 04:40:18.689885   17602 main.go:141] libmachine: STDOUT: 
	I0610 04:40:18.689905   17602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:18.689925   17602 client.go:171] duration metric: took 239.271542ms to LocalClient.Create
	I0610 04:40:20.692129   17602 start.go:128] duration metric: took 2.26657125s to createHost
	I0610 04:40:20.692197   17602 start.go:83] releasing machines lock for "bridge-463000", held for 2.26669s
	W0610 04:40:20.692250   17602 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:20.704946   17602 out.go:177] * Deleting "bridge-463000" in qemu2 ...
	W0610 04:40:20.737893   17602 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:20.737923   17602 start.go:728] Will try again in 5 seconds ...
	I0610 04:40:25.740244   17602 start.go:360] acquireMachinesLock for bridge-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:25.740792   17602 start.go:364] duration metric: took 426.833µs to acquireMachinesLock for "bridge-463000"
	I0610 04:40:25.740918   17602 start.go:93] Provisioning new machine with config: &{Name:bridge-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:25.741238   17602 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:25.756780   17602 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:25.807637   17602 start.go:159] libmachine.API.Create for "bridge-463000" (driver="qemu2")
	I0610 04:40:25.807688   17602 client.go:168] LocalClient.Create starting
	I0610 04:40:25.807809   17602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:25.807880   17602 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:25.807895   17602 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:25.807957   17602 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:25.808000   17602 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:25.808015   17602 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:25.808556   17602 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:25.966333   17602 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:26.139362   17602 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:26.139368   17602 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:26.139556   17602 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2
	I0610 04:40:26.152372   17602 main.go:141] libmachine: STDOUT: 
	I0610 04:40:26.152399   17602 main.go:141] libmachine: STDERR: 
	I0610 04:40:26.152453   17602 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2 +20000M
	I0610 04:40:26.163258   17602 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:26.163276   17602 main.go:141] libmachine: STDERR: 
	I0610 04:40:26.163287   17602 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2
	I0610 04:40:26.163291   17602 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:26.163349   17602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:57:80:46:00:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2
	I0610 04:40:26.165090   17602 main.go:141] libmachine: STDOUT: 
	I0610 04:40:26.165115   17602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:26.165128   17602 client.go:171] duration metric: took 357.432625ms to LocalClient.Create
	I0610 04:40:28.167367   17602 start.go:128] duration metric: took 2.426071208s to createHost
	I0610 04:40:28.167466   17602 start.go:83] releasing machines lock for "bridge-463000", held for 2.426629458s
	W0610 04:40:28.168055   17602 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:28.182744   17602 out.go:177] 
	W0610 04:40:28.186799   17602 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:40:28.186825   17602 out.go:239] * 
	* 
	W0610 04:40:28.189388   17602 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:40:28.201747   17602 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.91s)
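
The in-test retry ("Will try again in 5 seconds ...") cannot recover here: both attempts fail inside LocalClient.Create, on the host side, before QEMU ever boots a guest. To separate a QEMU/hvf problem from a socket_vmnet problem, the generated image could be booted by hand with user-mode networking in place of the socket netdev. This is an illustration only, not a supported minikube invocation, and the machine directory is removed by the retry loop, so these paths may no longer exist:

	# Hypothetical manual boot of the bridge-463000 disk, swapping the
	# "-netdev socket,id=net0,fd=3" of the failing command line for user-mode networking.
	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d \
	  -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/bridge-463000/disk.qcow2

If that boots, QEMU and the hvf accelerator are healthy and the fault is isolated to the socket_vmnet service on the agent; the qemu2 driver docs also describe a builtin (user-mode) network option that avoids socket_vmnet entirely, at the cost of no reachable node IP.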

TestNetworkPlugins/group/kubenet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.863798708s)

-- stdout --
	* [kubenet-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-463000" primary control-plane node in "kubenet-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:40:30.450862   17717 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:40:30.450989   17717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:30.450992   17717 out.go:304] Setting ErrFile to fd 2...
	I0610 04:40:30.450994   17717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:30.451124   17717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:40:30.452141   17717 out.go:298] Setting JSON to false
	I0610 04:40:30.468575   17717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9601,"bootTime":1718010029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:40:30.468643   17717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:40:30.473983   17717 out.go:177] * [kubenet-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:40:30.479923   17717 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:40:30.479962   17717 notify.go:220] Checking for updates...
	I0610 04:40:30.483932   17717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:40:30.486941   17717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:40:30.489969   17717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:40:30.493924   17717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:40:30.504572   17717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:40:30.508269   17717 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:30.508348   17717 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:30.508406   17717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:40:30.512908   17717 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:40:30.519929   17717 start.go:297] selected driver: qemu2
	I0610 04:40:30.519938   17717 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:40:30.519944   17717 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:40:30.522287   17717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:40:30.525962   17717 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:40:30.529052   17717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:40:30.529099   17717 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0610 04:40:30.529135   17717 start.go:340] cluster config:
	{Name:kubenet-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:40:30.533842   17717 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:40:30.540913   17717 out.go:177] * Starting "kubenet-463000" primary control-plane node in "kubenet-463000" cluster
	I0610 04:40:30.544921   17717 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:40:30.544937   17717 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:40:30.544946   17717 cache.go:56] Caching tarball of preloaded images
	I0610 04:40:30.545006   17717 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:40:30.545012   17717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:40:30.545092   17717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/kubenet-463000/config.json ...
	I0610 04:40:30.545103   17717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/kubenet-463000/config.json: {Name:mkbd447404197debe758be32a4e06c21260e3a81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:40:30.545352   17717 start.go:360] acquireMachinesLock for kubenet-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:30.545391   17717 start.go:364] duration metric: took 32.375µs to acquireMachinesLock for "kubenet-463000"
	I0610 04:40:30.545404   17717 start.go:93] Provisioning new machine with config: &{Name:kubenet-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:30.545448   17717 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:30.552890   17717 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:30.571900   17717 start.go:159] libmachine.API.Create for "kubenet-463000" (driver="qemu2")
	I0610 04:40:30.571932   17717 client.go:168] LocalClient.Create starting
	I0610 04:40:30.572000   17717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:30.572033   17717 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:30.572047   17717 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:30.572097   17717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:30.572123   17717 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:30.572132   17717 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:30.572596   17717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:30.717867   17717 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:30.792004   17717 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:30.792013   17717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:30.792175   17717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2
	I0610 04:40:30.804784   17717 main.go:141] libmachine: STDOUT: 
	I0610 04:40:30.804802   17717 main.go:141] libmachine: STDERR: 
	I0610 04:40:30.804852   17717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2 +20000M
	I0610 04:40:30.815648   17717 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:30.815676   17717 main.go:141] libmachine: STDERR: 
	I0610 04:40:30.815694   17717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2
	I0610 04:40:30.815699   17717 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:30.815730   17717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:49:4a:16:f1:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2
	I0610 04:40:30.817401   17717 main.go:141] libmachine: STDOUT: 
	I0610 04:40:30.817419   17717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:30.817445   17717 client.go:171] duration metric: took 245.504416ms to LocalClient.Create
	I0610 04:40:32.819641   17717 start.go:128] duration metric: took 2.274158708s to createHost
	I0610 04:40:32.819699   17717 start.go:83] releasing machines lock for "kubenet-463000", held for 2.274281167s
	W0610 04:40:32.819752   17717 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:32.832936   17717 out.go:177] * Deleting "kubenet-463000" in qemu2 ...
	W0610 04:40:32.862145   17717 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:32.862227   17717 start.go:728] Will try again in 5 seconds ...
	I0610 04:40:37.864526   17717 start.go:360] acquireMachinesLock for kubenet-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:37.864958   17717 start.go:364] duration metric: took 320.166µs to acquireMachinesLock for "kubenet-463000"
	I0610 04:40:37.865073   17717 start.go:93] Provisioning new machine with config: &{Name:kubenet-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:37.865393   17717 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:37.875941   17717 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:37.926954   17717 start.go:159] libmachine.API.Create for "kubenet-463000" (driver="qemu2")
	I0610 04:40:37.927007   17717 client.go:168] LocalClient.Create starting
	I0610 04:40:37.927112   17717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:37.927180   17717 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:37.927198   17717 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:37.927269   17717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:37.927312   17717 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:37.927322   17717 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:37.927842   17717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:38.082481   17717 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:38.215366   17717 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:38.215373   17717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:38.215579   17717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2
	I0610 04:40:38.228177   17717 main.go:141] libmachine: STDOUT: 
	I0610 04:40:38.228200   17717 main.go:141] libmachine: STDERR: 
	I0610 04:40:38.228274   17717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2 +20000M
	I0610 04:40:38.239333   17717 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:38.239349   17717 main.go:141] libmachine: STDERR: 
	I0610 04:40:38.239361   17717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2
	I0610 04:40:38.239368   17717 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:38.239407   17717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:e7:94:34:ab:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/kubenet-463000/disk.qcow2
	I0610 04:40:38.241071   17717 main.go:141] libmachine: STDOUT: 
	I0610 04:40:38.241093   17717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:38.241107   17717 client.go:171] duration metric: took 314.091292ms to LocalClient.Create
	I0610 04:40:40.243289   17717 start.go:128] duration metric: took 2.377851959s to createHost
	I0610 04:40:40.243357   17717 start.go:83] releasing machines lock for "kubenet-463000", held for 2.378356875s
	W0610 04:40:40.243713   17717 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:40.253343   17717 out.go:177] 
	W0610 04:40:40.260458   17717 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:40:40.260481   17717 out.go:239] * 
	* 
	W0610 04:40:40.263529   17717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:40:40.271449   17717 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.87s)
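
Every failure in this group has the same host-side root cause: nothing is accepting connections on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client exits with "Connection refused" before QEMU is ever launched, and minikube gives up after one retry with GUEST_PROVISION (exit status 80). A minimal health check on the affected agent might look like the following sketch; the paths match this run's configuration, while the Homebrew service name is an assumption that only applies to brew-based installs:

	# Does the UNIX socket exist, and is the daemon accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null   # "Connection refused" reproduces the error above
	# Is the socket_vmnet daemon process running at all?
	pgrep -fl socket_vmnet
	# Restart it (assumption: installed via Homebrew; the daemon must run as root)
	sudo brew services restart socket_vmnet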

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.766269s)

                                                
                                                
-- stdout --
	* [custom-flannel-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-463000" primary control-plane node in "custom-flannel-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:40:42.514052   17831 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:40:42.514196   17831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:42.514199   17831 out.go:304] Setting ErrFile to fd 2...
	I0610 04:40:42.514202   17831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:42.514325   17831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:40:42.515347   17831 out.go:298] Setting JSON to false
	I0610 04:40:42.531827   17831 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9613,"bootTime":1718010029,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:40:42.531885   17831 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:40:42.536700   17831 out.go:177] * [custom-flannel-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:40:42.543732   17831 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:40:42.546671   17831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:40:42.543790   17831 notify.go:220] Checking for updates...
	I0610 04:40:42.550628   17831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:40:42.553696   17831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:40:42.557619   17831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:40:42.560652   17831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:40:42.564086   17831 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:42.564168   17831 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:42.564214   17831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:40:42.568645   17831 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:40:42.575670   17831 start.go:297] selected driver: qemu2
	I0610 04:40:42.575677   17831 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:40:42.575682   17831 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:40:42.578031   17831 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:40:42.581619   17831 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:40:42.584758   17831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:40:42.584773   17831 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0610 04:40:42.584780   17831 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0610 04:40:42.584810   17831 start.go:340] cluster config:
	{Name:custom-flannel-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:40:42.589317   17831 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:40:42.596658   17831 out.go:177] * Starting "custom-flannel-463000" primary control-plane node in "custom-flannel-463000" cluster
	I0610 04:40:42.600709   17831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:40:42.600725   17831 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:40:42.600736   17831 cache.go:56] Caching tarball of preloaded images
	I0610 04:40:42.600803   17831 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:40:42.600809   17831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:40:42.600881   17831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/custom-flannel-463000/config.json ...
	I0610 04:40:42.600897   17831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/custom-flannel-463000/config.json: {Name:mkb3e75c2113eecfc3a2f298f577f4ca18470194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:40:42.601127   17831 start.go:360] acquireMachinesLock for custom-flannel-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:42.601165   17831 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "custom-flannel-463000"
	I0610 04:40:42.601176   17831 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:42.601205   17831 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:42.608662   17831 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:42.627118   17831 start.go:159] libmachine.API.Create for "custom-flannel-463000" (driver="qemu2")
	I0610 04:40:42.627150   17831 client.go:168] LocalClient.Create starting
	I0610 04:40:42.627213   17831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:42.627248   17831 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:42.627263   17831 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:42.627305   17831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:42.627328   17831 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:42.627334   17831 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:42.627787   17831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:42.772871   17831 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:42.845870   17831 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:42.845876   17831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:42.846053   17831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2
	I0610 04:40:42.858524   17831 main.go:141] libmachine: STDOUT: 
	I0610 04:40:42.858542   17831 main.go:141] libmachine: STDERR: 
	I0610 04:40:42.858603   17831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2 +20000M
	I0610 04:40:42.869623   17831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:42.869641   17831 main.go:141] libmachine: STDERR: 
	I0610 04:40:42.869658   17831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2
	I0610 04:40:42.869663   17831 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:42.869706   17831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:4e:86:9d:fa:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2
	I0610 04:40:42.871508   17831 main.go:141] libmachine: STDOUT: 
	I0610 04:40:42.871525   17831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:42.871541   17831 client.go:171] duration metric: took 244.383375ms to LocalClient.Create
	I0610 04:40:44.873820   17831 start.go:128] duration metric: took 2.272560167s to createHost
	I0610 04:40:44.873900   17831 start.go:83] releasing machines lock for "custom-flannel-463000", held for 2.272708833s
	W0610 04:40:44.873958   17831 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:44.885161   17831 out.go:177] * Deleting "custom-flannel-463000" in qemu2 ...
	W0610 04:40:44.916861   17831 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:44.916896   17831 start.go:728] Will try again in 5 seconds ...
	I0610 04:40:49.919143   17831 start.go:360] acquireMachinesLock for custom-flannel-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:49.919601   17831 start.go:364] duration metric: took 363.25µs to acquireMachinesLock for "custom-flannel-463000"
	I0610 04:40:49.919722   17831 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:49.920026   17831 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:49.936812   17831 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:49.986670   17831 start.go:159] libmachine.API.Create for "custom-flannel-463000" (driver="qemu2")
	I0610 04:40:49.986710   17831 client.go:168] LocalClient.Create starting
	I0610 04:40:49.986843   17831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:49.986907   17831 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:49.986922   17831 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:49.986991   17831 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:49.987037   17831 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:49.987051   17831 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:49.987649   17831 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:50.142971   17831 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:50.180649   17831 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:50.180654   17831 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:50.180826   17831 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2
	I0610 04:40:50.193520   17831 main.go:141] libmachine: STDOUT: 
	I0610 04:40:50.193542   17831 main.go:141] libmachine: STDERR: 
	I0610 04:40:50.193603   17831 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2 +20000M
	I0610 04:40:50.204414   17831 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:50.204435   17831 main.go:141] libmachine: STDERR: 
	I0610 04:40:50.204446   17831 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2
	I0610 04:40:50.204451   17831 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:50.204493   17831 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:f9:52:29:1f:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/custom-flannel-463000/disk.qcow2
	I0610 04:40:50.206188   17831 main.go:141] libmachine: STDOUT: 
	I0610 04:40:50.206203   17831 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:50.206217   17831 client.go:171] duration metric: took 219.491042ms to LocalClient.Create
	I0610 04:40:52.208410   17831 start.go:128] duration metric: took 2.288324833s to createHost
	I0610 04:40:52.208470   17831 start.go:83] releasing machines lock for "custom-flannel-463000", held for 2.288825292s
	W0610 04:40:52.208885   17831 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:52.224475   17831 out.go:177] 
	W0610 04:40:52.229691   17831 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:40:52.229718   17831 out.go:239] * 
	* 
	W0610 04:40:52.232349   17831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:40:52.238522   17831 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.77s)
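
Once the daemon is healthy, a single failed group can be retried without replaying the whole suite. A sketch using only the standard go test selectors (the integration harness also accepts flags pointing at the out/minikube-darwin-arm64 binary under test; those are omitted here because they vary by setup):

	# -run takes a regexp over test and subtest names; quote the slash-separated path.
	go test ./test/integration -v -timeout 30m \
		-run 'TestNetworkPlugins/group/custom-flannel'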

                                                
                                    
TestNetworkPlugins/group/calico/Start (10.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.016479s)

                                                
                                                
-- stdout --
	* [calico-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-463000" primary control-plane node in "calico-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:40:54.671732   17954 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:40:54.671876   17954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:54.671879   17954 out.go:304] Setting ErrFile to fd 2...
	I0610 04:40:54.671882   17954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:40:54.672005   17954 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:40:54.673065   17954 out.go:298] Setting JSON to false
	I0610 04:40:54.689322   17954 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9625,"bootTime":1718010029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:40:54.689382   17954 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:40:54.693831   17954 out.go:177] * [calico-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:40:54.700530   17954 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:40:54.703457   17954 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:40:54.700585   17954 notify.go:220] Checking for updates...
	I0610 04:40:54.707487   17954 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:40:54.710486   17954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:40:54.714468   17954 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:40:54.717424   17954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:40:54.720881   17954 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:54.720951   17954 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:40:54.720998   17954 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:40:54.725455   17954 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:40:54.732384   17954 start.go:297] selected driver: qemu2
	I0610 04:40:54.732389   17954 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:40:54.732395   17954 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:40:54.734709   17954 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:40:54.738337   17954 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:40:54.741540   17954 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:40:54.741563   17954 cni.go:84] Creating CNI manager for "calico"
	I0610 04:40:54.741567   17954 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0610 04:40:54.741604   17954 start.go:340] cluster config:
	{Name:calico-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:40:54.746136   17954 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:40:54.754394   17954 out.go:177] * Starting "calico-463000" primary control-plane node in "calico-463000" cluster
	I0610 04:40:54.758435   17954 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:40:54.758451   17954 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:40:54.758462   17954 cache.go:56] Caching tarball of preloaded images
	I0610 04:40:54.758522   17954 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:40:54.758527   17954 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:40:54.758609   17954 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/calico-463000/config.json ...
	I0610 04:40:54.758620   17954 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/calico-463000/config.json: {Name:mkd21bbbc6a25d713d92d4fcb5e4773a3adbd4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:40:54.759019   17954 start.go:360] acquireMachinesLock for calico-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:40:54.759055   17954 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "calico-463000"
	I0610 04:40:54.759066   17954 start.go:93] Provisioning new machine with config: &{Name:calico-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:40:54.759093   17954 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:40:54.763358   17954 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:40:54.781515   17954 start.go:159] libmachine.API.Create for "calico-463000" (driver="qemu2")
	I0610 04:40:54.781541   17954 client.go:168] LocalClient.Create starting
	I0610 04:40:54.781613   17954 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:40:54.781648   17954 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:54.781661   17954 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:54.781705   17954 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:40:54.781734   17954 main.go:141] libmachine: Decoding PEM data...
	I0610 04:40:54.781743   17954 main.go:141] libmachine: Parsing certificate...
	I0610 04:40:54.782241   17954 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:40:54.927604   17954 main.go:141] libmachine: Creating SSH key...
	I0610 04:40:55.159299   17954 main.go:141] libmachine: Creating Disk image...
	I0610 04:40:55.159307   17954 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:40:55.159532   17954 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2
	I0610 04:40:55.172836   17954 main.go:141] libmachine: STDOUT: 
	I0610 04:40:55.172858   17954 main.go:141] libmachine: STDERR: 
	I0610 04:40:55.172904   17954 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2 +20000M
	I0610 04:40:55.184000   17954 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:40:55.184017   17954 main.go:141] libmachine: STDERR: 
	I0610 04:40:55.184029   17954 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2
	I0610 04:40:55.184034   17954 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:40:55.184068   17954 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d3:e6:78:85:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2
	I0610 04:40:55.185919   17954 main.go:141] libmachine: STDOUT: 
	I0610 04:40:55.185943   17954 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:40:55.185960   17954 client.go:171] duration metric: took 405.126542ms to LocalClient.Create
	I0610 04:40:57.184811   17954 start.go:128] duration metric: took 2.429784916s to createHost
	I0610 04:40:57.184879   17954 start.go:83] releasing machines lock for "calico-463000", held for 2.429902833s
	W0610 04:40:57.184968   17954 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:57.200263   17954 out.go:177] * Deleting "calico-463000" in qemu2 ...
	W0610 04:40:57.229470   17954 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:40:57.229500   17954 start.go:728] Will try again in 5 seconds ...
	I0610 04:41:02.225187   17954 start.go:360] acquireMachinesLock for calico-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:02.225724   17954 start.go:364] duration metric: took 370.458µs to acquireMachinesLock for "calico-463000"
	I0610 04:41:02.225848   17954 start.go:93] Provisioning new machine with config: &{Name:calico-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:41:02.226182   17954 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:41:02.237786   17954 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:41:02.288825   17954 start.go:159] libmachine.API.Create for "calico-463000" (driver="qemu2")
	I0610 04:41:02.288870   17954 client.go:168] LocalClient.Create starting
	I0610 04:41:02.288982   17954 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:41:02.289052   17954 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:02.289070   17954 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:02.289140   17954 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:41:02.289183   17954 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:02.289202   17954 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:02.289746   17954 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:41:02.443948   17954 main.go:141] libmachine: Creating SSH key...
	I0610 04:41:02.578079   17954 main.go:141] libmachine: Creating Disk image...
	I0610 04:41:02.578085   17954 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:41:02.578256   17954 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2
	I0610 04:41:02.590928   17954 main.go:141] libmachine: STDOUT: 
	I0610 04:41:02.590944   17954 main.go:141] libmachine: STDERR: 
	I0610 04:41:02.591007   17954 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2 +20000M
	I0610 04:41:02.601902   17954 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:41:02.601920   17954 main.go:141] libmachine: STDERR: 
	I0610 04:41:02.601928   17954 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2
	I0610 04:41:02.601931   17954 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:41:02.601969   17954 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:c5:4d:0b:17:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/calico-463000/disk.qcow2
	I0610 04:41:02.603754   17954 main.go:141] libmachine: STDOUT: 
	I0610 04:41:02.603771   17954 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:02.603785   17954 client.go:171] duration metric: took 315.262958ms to LocalClient.Create
	I0610 04:41:04.603958   17954 start.go:128] duration metric: took 2.380228s to createHost
	I0610 04:41:04.604027   17954 start.go:83] releasing machines lock for "calico-463000", held for 2.380770708s
	W0610 04:41:04.604394   17954 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:04.616905   17954 out.go:177] 
	W0610 04:41:04.620930   17954 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:41:04.621032   17954 out.go:239] * 
	* 
	W0610 04:41:04.623564   17954 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:41:04.631856   17954 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.02s)
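
Every start attempt in this run dies the same way: qemu-system-aarch64 is launched through socket_vmnet_client, which must first dial the unix socket /var/run/socket_vmnet, and nothing is listening there. A minimal Go sketch of that connectivity check (standalone illustration, not minikube code; the socket path is taken from the failing command line above):

// probe_socket.go — a sketch (assumption: run on the same host) of the dial
// that fails throughout this report. With no socket_vmnet daemon listening,
// DialTimeout returns "connection refused", matching the STDERR lines above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is up")
}

On this CI host the probe would exit 1, which is consistent with every VM creation below failing in the same few milliseconds.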

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-463000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.773953375s)

-- stdout --
	* [false-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-463000" primary control-plane node in "false-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:41:07.090934   18075 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:41:07.091050   18075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:07.091054   18075 out.go:304] Setting ErrFile to fd 2...
	I0610 04:41:07.091056   18075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:07.091168   18075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:41:07.092260   18075 out.go:298] Setting JSON to false
	I0610 04:41:07.108330   18075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9638,"bootTime":1718010029,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:41:07.108383   18075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:41:07.113999   18075 out.go:177] * [false-463000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:41:07.121029   18075 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:41:07.125008   18075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:41:07.121091   18075 notify.go:220] Checking for updates...
	I0610 04:41:07.130929   18075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:41:07.134927   18075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:41:07.137988   18075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:41:07.140900   18075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:41:07.144265   18075 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:07.144342   18075 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:07.144389   18075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:41:07.148965   18075 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:41:07.155948   18075 start.go:297] selected driver: qemu2
	I0610 04:41:07.155953   18075 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:41:07.155958   18075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:41:07.158274   18075 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:41:07.161956   18075 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:41:07.166072   18075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:41:07.166108   18075 cni.go:84] Creating CNI manager for "false"
	I0610 04:41:07.166142   18075 start.go:340] cluster config:
	{Name:false-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:41:07.170868   18075 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:07.178907   18075 out.go:177] * Starting "false-463000" primary control-plane node in "false-463000" cluster
	I0610 04:41:07.182907   18075 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:41:07.182927   18075 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:41:07.182936   18075 cache.go:56] Caching tarball of preloaded images
	I0610 04:41:07.182992   18075 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:41:07.182997   18075 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:41:07.183068   18075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/false-463000/config.json ...
	I0610 04:41:07.183080   18075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/false-463000/config.json: {Name:mkca2933e00cfef4743c434128712fc2a3950113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:41:07.183331   18075 start.go:360] acquireMachinesLock for false-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:07.183367   18075 start.go:364] duration metric: took 29.834µs to acquireMachinesLock for "false-463000"
	I0610 04:41:07.183378   18075 start.go:93] Provisioning new machine with config: &{Name:false-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:41:07.183409   18075 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:41:07.190899   18075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:41:07.209541   18075 start.go:159] libmachine.API.Create for "false-463000" (driver="qemu2")
	I0610 04:41:07.209566   18075 client.go:168] LocalClient.Create starting
	I0610 04:41:07.209635   18075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:41:07.209667   18075 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:07.209679   18075 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:07.209722   18075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:41:07.209745   18075 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:07.209754   18075 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:07.210099   18075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:41:07.352881   18075 main.go:141] libmachine: Creating SSH key...
	I0610 04:41:07.388803   18075 main.go:141] libmachine: Creating Disk image...
	I0610 04:41:07.388809   18075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:41:07.388970   18075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2
	I0610 04:41:07.401053   18075 main.go:141] libmachine: STDOUT: 
	I0610 04:41:07.401074   18075 main.go:141] libmachine: STDERR: 
	I0610 04:41:07.401129   18075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2 +20000M
	I0610 04:41:07.412540   18075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:41:07.412556   18075 main.go:141] libmachine: STDERR: 
	I0610 04:41:07.412593   18075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2
	I0610 04:41:07.412598   18075 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:41:07.412625   18075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:36:55:43:5e:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2
	I0610 04:41:07.414300   18075 main.go:141] libmachine: STDOUT: 
	I0610 04:41:07.414314   18075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:07.414333   18075 client.go:171] duration metric: took 204.928292ms to LocalClient.Create
	I0610 04:41:09.415038   18075 start.go:128] duration metric: took 2.23331025s to createHost
	I0610 04:41:09.415083   18075 start.go:83] releasing machines lock for "false-463000", held for 2.233409458s
	W0610 04:41:09.415144   18075 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:09.426011   18075 out.go:177] * Deleting "false-463000" in qemu2 ...
	W0610 04:41:09.457680   18075 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:09.457706   18075 start.go:728] Will try again in 5 seconds ...
	I0610 04:41:14.456875   18075 start.go:360] acquireMachinesLock for false-463000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:14.457357   18075 start.go:364] duration metric: took 365.792µs to acquireMachinesLock for "false-463000"
	I0610 04:41:14.457473   18075 start.go:93] Provisioning new machine with config: &{Name:false-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:41:14.457752   18075 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:41:14.467203   18075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 04:41:14.517899   18075 start.go:159] libmachine.API.Create for "false-463000" (driver="qemu2")
	I0610 04:41:14.517955   18075 client.go:168] LocalClient.Create starting
	I0610 04:41:14.518107   18075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:41:14.518188   18075 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:14.518210   18075 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:14.518293   18075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:41:14.518339   18075 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:14.518351   18075 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:14.518884   18075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:41:14.674862   18075 main.go:141] libmachine: Creating SSH key...
	I0610 04:41:14.758891   18075 main.go:141] libmachine: Creating Disk image...
	I0610 04:41:14.758896   18075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:41:14.759074   18075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2
	I0610 04:41:14.771393   18075 main.go:141] libmachine: STDOUT: 
	I0610 04:41:14.771415   18075 main.go:141] libmachine: STDERR: 
	I0610 04:41:14.771479   18075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2 +20000M
	I0610 04:41:14.782451   18075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:41:14.782471   18075 main.go:141] libmachine: STDERR: 
	I0610 04:41:14.782482   18075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2
	I0610 04:41:14.782487   18075 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:41:14.782522   18075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:7f:1c:14:83:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/false-463000/disk.qcow2
	I0610 04:41:14.784172   18075 main.go:141] libmachine: STDOUT: 
	I0610 04:41:14.784189   18075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:14.784201   18075 client.go:171] duration metric: took 266.3785ms to LocalClient.Create
	I0610 04:41:16.785426   18075 start.go:128] duration metric: took 2.328727458s to createHost
	I0610 04:41:16.785480   18075 start.go:83] releasing machines lock for "false-463000", held for 2.32920875s
	W0610 04:41:16.785915   18075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:16.799528   18075 out.go:177] 
	W0610 04:41:16.803571   18075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:41:16.803601   18075 out.go:239] * 
	* 
	W0610 04:41:16.806488   18075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:41:16.817444   18075 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
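
The driver does not give up on the first refused connection: the log shows one failed StartHost, a "Deleting ... in qemu2" cleanup, a fixed five-second pause ("Will try again in 5 seconds"), and a second attempt before the hard GUEST_PROVISION exit. A rough sketch of that retry shape, with startHost as a hypothetical stand-in for the real VM creation:

// retry_sketch.go — the one-retry pattern visible in this log; startHost is
// a made-up stand-in, not minikube's actual function.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func startHost() error {
	// Stand-in failure; in the log this is the socket_vmnet dial error.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := startHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		os.Exit(80) // the exit status the tests assert against
	}
}

Because the daemon stays down for the whole run, the second attempt fails identically and every test in this group ends with exit status 80.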

TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-278000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-278000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.907681833s)

-- stdout --
	* [old-k8s-version-278000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-278000" primary control-plane node in "old-k8s-version-278000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-278000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:41:18.989660   18192 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:41:18.989777   18192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:18.989780   18192 out.go:304] Setting ErrFile to fd 2...
	I0610 04:41:18.989788   18192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:18.989924   18192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:41:18.990980   18192 out.go:298] Setting JSON to false
	I0610 04:41:19.007295   18192 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9649,"bootTime":1718010029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:41:19.007379   18192 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:41:19.013249   18192 out.go:177] * [old-k8s-version-278000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:41:19.020340   18192 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:41:19.023305   18192 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:41:19.020382   18192 notify.go:220] Checking for updates...
	I0610 04:41:19.030261   18192 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:41:19.034291   18192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:41:19.037254   18192 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:41:19.040341   18192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:41:19.043663   18192 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:19.043730   18192 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:19.043783   18192 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:41:19.047264   18192 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:41:19.054280   18192 start.go:297] selected driver: qemu2
	I0610 04:41:19.054285   18192 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:41:19.054292   18192 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:41:19.056579   18192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:41:19.060259   18192 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:41:19.064379   18192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:41:19.064400   18192 cni.go:84] Creating CNI manager for ""
	I0610 04:41:19.064416   18192 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 04:41:19.064463   18192 start.go:340] cluster config:
	{Name:old-k8s-version-278000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:41:19.069077   18192 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:19.077293   18192 out.go:177] * Starting "old-k8s-version-278000" primary control-plane node in "old-k8s-version-278000" cluster
	I0610 04:41:19.081290   18192 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:41:19.081305   18192 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 04:41:19.081317   18192 cache.go:56] Caching tarball of preloaded images
	I0610 04:41:19.081386   18192 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:41:19.081392   18192 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 04:41:19.081470   18192 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/old-k8s-version-278000/config.json ...
	I0610 04:41:19.081481   18192 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/old-k8s-version-278000/config.json: {Name:mk511a8cee2a29c1bbb4f741d04655cad65c7712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:41:19.081707   18192 start.go:360] acquireMachinesLock for old-k8s-version-278000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:19.081749   18192 start.go:364] duration metric: took 34.25µs to acquireMachinesLock for "old-k8s-version-278000"
	I0610 04:41:19.081761   18192 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:41:19.081793   18192 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:41:19.088208   18192 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:41:19.106649   18192 start.go:159] libmachine.API.Create for "old-k8s-version-278000" (driver="qemu2")
	I0610 04:41:19.106679   18192 client.go:168] LocalClient.Create starting
	I0610 04:41:19.106742   18192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:41:19.106775   18192 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:19.106789   18192 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:19.106835   18192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:41:19.106868   18192 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:19.106875   18192 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:19.107248   18192 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:41:19.345472   18192 main.go:141] libmachine: Creating SSH key...
	I0610 04:41:19.416998   18192 main.go:141] libmachine: Creating Disk image...
	I0610 04:41:19.417004   18192 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:41:19.417186   18192 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2
	I0610 04:41:19.429612   18192 main.go:141] libmachine: STDOUT: 
	I0610 04:41:19.429631   18192 main.go:141] libmachine: STDERR: 
	I0610 04:41:19.429685   18192 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2 +20000M
	I0610 04:41:19.440562   18192 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:41:19.440576   18192 main.go:141] libmachine: STDERR: 
	I0610 04:41:19.440590   18192 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2
	I0610 04:41:19.440603   18192 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:41:19.440634   18192 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e5:af:4a:c1:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2
	I0610 04:41:19.442224   18192 main.go:141] libmachine: STDOUT: 
	I0610 04:41:19.442239   18192 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:19.442259   18192 client.go:171] duration metric: took 335.70075ms to LocalClient.Create
	I0610 04:41:21.443758   18192 start.go:128] duration metric: took 2.362773042s to createHost
	I0610 04:41:21.443908   18192 start.go:83] releasing machines lock for "old-k8s-version-278000", held for 2.362885958s
	W0610 04:41:21.443985   18192 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:21.458269   18192 out.go:177] * Deleting "old-k8s-version-278000" in qemu2 ...
	W0610 04:41:21.488485   18192 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:21.488523   18192 start.go:728] Will try again in 5 seconds ...
	I0610 04:41:26.489355   18192 start.go:360] acquireMachinesLock for old-k8s-version-278000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:26.489906   18192 start.go:364] duration metric: took 385µs to acquireMachinesLock for "old-k8s-version-278000"
	I0610 04:41:26.490028   18192 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:41:26.490354   18192 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:41:26.501837   18192 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:41:26.550547   18192 start.go:159] libmachine.API.Create for "old-k8s-version-278000" (driver="qemu2")
	I0610 04:41:26.550602   18192 client.go:168] LocalClient.Create starting
	I0610 04:41:26.550716   18192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:41:26.550785   18192 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:26.550815   18192 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:26.550874   18192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:41:26.550920   18192 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:26.550933   18192 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:26.551551   18192 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:41:26.709679   18192 main.go:141] libmachine: Creating SSH key...
	I0610 04:41:26.795540   18192 main.go:141] libmachine: Creating Disk image...
	I0610 04:41:26.795545   18192 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:41:26.795731   18192 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2
	I0610 04:41:26.808072   18192 main.go:141] libmachine: STDOUT: 
	I0610 04:41:26.808102   18192 main.go:141] libmachine: STDERR: 
	I0610 04:41:26.808179   18192 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2 +20000M
	I0610 04:41:26.819112   18192 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:41:26.819126   18192 main.go:141] libmachine: STDERR: 
	I0610 04:41:26.819139   18192 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2
	I0610 04:41:26.819143   18192 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:41:26.819183   18192 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:2a:4e:f1:ce:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2
	I0610 04:41:26.820859   18192 main.go:141] libmachine: STDOUT: 
	I0610 04:41:26.820873   18192 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:26.820890   18192 client.go:171] duration metric: took 270.345583ms to LocalClient.Create
	I0610 04:41:28.822632   18192 start.go:128] duration metric: took 2.332754834s to createHost
	I0610 04:41:28.822688   18192 start.go:83] releasing machines lock for "old-k8s-version-278000", held for 2.333273667s
	W0610 04:41:28.823060   18192 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-278000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-278000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:28.837754   18192 out.go:177] 
	W0610 04:41:28.841797   18192 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:41:28.841823   18192 out.go:239] * 
	* 
	W0610 04:41:28.844464   18192 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:41:28.852702   18192 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-278000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (68.988625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)
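
Before each launch attempt, libmachine prepares the disk with the two qemu-img invocations visible verbatim in the log: a raw-to-qcow2 convert followed by a +20000M resize. A minimal sketch of the same pair via os/exec, with placeholder file names instead of the CI paths:

// disk_prep.go — a sketch of the two qemu-img calls shown above; file names
// are placeholders, and qemu-img must be on PATH for this to run.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout // surfaced in the log as the STDOUT:/STDERR: lines
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
		fmt.Fprintln(os.Stderr, "convert failed:", err)
		os.Exit(1)
	}
	if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, "resize failed:", err)
		os.Exit(1)
	}
}

Note that both qemu-img steps succeed in every attempt above ("Image resized."); the failure only happens afterwards, when socket_vmnet_client tries to hand the VM a network file descriptor.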

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-278000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-278000 create -f testdata/busybox.yaml: exit status 1 (30.838209ms)

** stderr ** 
	error: context "old-k8s-version-278000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-278000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (30.471791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (29.16825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
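
The harness's "(dbg) Non-zero exit ... exit status 1" lines come from running kubectl as a subprocess and inspecting its exit code; since the cluster never started, the context old-k8s-version-278000 does not exist and kubectl fails immediately. A small sketch of capturing a subprocess exit status in Go (the command here is illustrative, not the test's exact invocation):

// exit_status.go — recovering a subprocess exit code the way the harness
// reports it. `false` stands in for the failing kubectl/minikube calls;
// any non-zero exit behaves the same.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("false").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("Non-zero exit: exit status %d\n", exitErr.ExitCode())
	}
}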

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-278000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-278000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-278000 describe deploy/metrics-server -n kube-system: exit status 1 (26.654417ms)
** stderr **
	error: context "old-k8s-version-278000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-278000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (30.651917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-278000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-278000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.191632792s)
-- stdout --
	* [old-k8s-version-278000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-278000" primary control-plane node in "old-k8s-version-278000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-278000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-278000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I0610 04:41:32.724214   18242 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:41:32.724345   18242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:32.724348   18242 out.go:304] Setting ErrFile to fd 2...
	I0610 04:41:32.724351   18242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:32.724484   18242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:41:32.725467   18242 out.go:298] Setting JSON to false
	I0610 04:41:32.741761   18242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9663,"bootTime":1718010029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:41:32.741830   18242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:41:32.747186   18242 out.go:177] * [old-k8s-version-278000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:41:32.754193   18242 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:41:32.757138   18242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:41:32.754246   18242 notify.go:220] Checking for updates...
	I0610 04:41:32.764057   18242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:41:32.767172   18242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:41:32.770138   18242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:41:32.773119   18242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:41:32.776420   18242 config.go:182] Loaded profile config "old-k8s-version-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0610 04:41:32.780164   18242 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 04:41:32.781584   18242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:41:32.786132   18242 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:41:32.793026   18242 start.go:297] selected driver: qemu2
	I0610 04:41:32.793032   18242 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:41:32.793098   18242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:41:32.795310   18242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:41:32.795336   18242 cni.go:84] Creating CNI manager for ""
	I0610 04:41:32.795343   18242 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 04:41:32.795368   18242 start.go:340] cluster config:
	{Name:old-k8s-version-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:41:32.799707   18242 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:32.808166   18242 out.go:177] * Starting "old-k8s-version-278000" primary control-plane node in "old-k8s-version-278000" cluster
	I0610 04:41:32.812167   18242 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:41:32.812182   18242 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 04:41:32.812196   18242 cache.go:56] Caching tarball of preloaded images
	I0610 04:41:32.812258   18242 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:41:32.812263   18242 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 04:41:32.812327   18242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/old-k8s-version-278000/config.json ...
	I0610 04:41:32.812843   18242 start.go:360] acquireMachinesLock for old-k8s-version-278000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:32.812871   18242 start.go:364] duration metric: took 21.916µs to acquireMachinesLock for "old-k8s-version-278000"
	I0610 04:41:32.812879   18242 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:41:32.812885   18242 fix.go:54] fixHost starting: 
	I0610 04:41:32.813006   18242 fix.go:112] recreateIfNeeded on old-k8s-version-278000: state=Stopped err=<nil>
	W0610 04:41:32.813019   18242 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:41:32.816163   18242 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-278000" ...
	I0610 04:41:32.823140   18242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:2a:4e:f1:ce:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2
	I0610 04:41:32.825166   18242 main.go:141] libmachine: STDOUT: 
	I0610 04:41:32.825186   18242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:32.825213   18242 fix.go:56] duration metric: took 12.328041ms for fixHost
	I0610 04:41:32.825224   18242 start.go:83] releasing machines lock for "old-k8s-version-278000", held for 12.343875ms
	W0610 04:41:32.825232   18242 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:41:32.825261   18242 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:32.825266   18242 start.go:728] Will try again in 5 seconds ...
	I0610 04:41:37.826827   18242 start.go:360] acquireMachinesLock for old-k8s-version-278000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:37.827295   18242 start.go:364] duration metric: took 352.959µs to acquireMachinesLock for "old-k8s-version-278000"
	I0610 04:41:37.827459   18242 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:41:37.827478   18242 fix.go:54] fixHost starting: 
	I0610 04:41:37.828214   18242 fix.go:112] recreateIfNeeded on old-k8s-version-278000: state=Stopped err=<nil>
	W0610 04:41:37.828242   18242 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:41:37.836719   18242 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-278000" ...
	I0610 04:41:37.840976   18242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:2a:4e:f1:ce:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/old-k8s-version-278000/disk.qcow2
	I0610 04:41:37.850556   18242 main.go:141] libmachine: STDOUT: 
	I0610 04:41:37.850629   18242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:37.850752   18242 fix.go:56] duration metric: took 23.276333ms for fixHost
	I0610 04:41:37.850772   18242 start.go:83] releasing machines lock for "old-k8s-version-278000", held for 23.452833ms
	W0610 04:41:37.851013   18242 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-278000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-278000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:37.859587   18242 out.go:177] 
	W0610 04:41:37.862853   18242 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:41:37.862892   18242 out.go:239] * 
	* 
	W0610 04:41:37.865824   18242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:41:37.873704   18242 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-278000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (66.643834ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
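Note: both restart attempts above fail at the same point: the socket_vmnet client (/opt/socket_vmnet/bin/socket_vmnet_client, path from the cluster config above) cannot reach the daemon socket at /var/run/socket_vmnet, so qemu never receives its network file descriptor and the VM is never booted. A minimal host-side diagnostic sketch; the Homebrew service name is an assumption about how socket_vmnet was installed on this agent:

	# does the socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if the daemon is down, restarting it should clear "Connection refused";
	# with a Homebrew-managed install this is typically:
	sudo brew services restart socket_vmnet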

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-278000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (33.363292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-278000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-278000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-278000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.761542ms)
** stderr **
	error: context "old-k8s-version-278000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-278000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (29.725125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-278000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (29.768625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-278000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-278000 --alsologtostderr -v=1: exit status 83 (39.867541ms)
-- stdout --
	* The control-plane node old-k8s-version-278000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-278000"
-- /stdout --
** stderr **
	I0610 04:41:38.148129   18264 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:41:38.148494   18264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:38.148497   18264 out.go:304] Setting ErrFile to fd 2...
	I0610 04:41:38.148500   18264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:38.148659   18264 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:41:38.148864   18264 out.go:298] Setting JSON to false
	I0610 04:41:38.148871   18264 mustload.go:65] Loading cluster: old-k8s-version-278000
	I0610 04:41:38.149040   18264 config.go:182] Loaded profile config "old-k8s-version-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0610 04:41:38.153209   18264 out.go:177] * The control-plane node old-k8s-version-278000 host is not running: state=Stopped
	I0610 04:41:38.154372   18264 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-278000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-278000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (30.49125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (29.4785ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-278000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
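Note: exit status 83 here only restates that the host is stopped; pause cannot succeed because the VM never came up. Once socket_vmnet is reachable again, the recovery minikube itself suggested in the SecondStart log above is the usual way out (flags trimmed from the original invocation):

	# recreate the broken profile
	out/minikube-darwin-arm64 delete -p old-k8s-version-278000
	out/minikube-darwin-arm64 start -p old-k8s-version-278000 --driver=qemu2 --kubernetes-version=v1.20.0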

TestStartStop/group/no-preload/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-335000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-335000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.829481833s)
-- stdout --
	* [no-preload-335000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-335000" primary control-plane node in "no-preload-335000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-335000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr **
	I0610 04:41:38.605426   18287 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:41:38.605555   18287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:38.605558   18287 out.go:304] Setting ErrFile to fd 2...
	I0610 04:41:38.605561   18287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:38.605701   18287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:41:38.606829   18287 out.go:298] Setting JSON to false
	I0610 04:41:38.623902   18287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9669,"bootTime":1718010029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:41:38.623977   18287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:41:38.628769   18287 out.go:177] * [no-preload-335000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:41:38.635506   18287 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:41:38.635539   18287 notify.go:220] Checking for updates...
	I0610 04:41:38.643371   18287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:41:38.646436   18287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:41:38.649484   18287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:41:38.652473   18287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:41:38.655385   18287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:41:38.658811   18287 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:38.658871   18287 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:38.658925   18287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:41:38.663309   18287 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:41:38.670447   18287 start.go:297] selected driver: qemu2
	I0610 04:41:38.670453   18287 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:41:38.670460   18287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:41:38.672650   18287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:41:38.675396   18287 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:41:38.678561   18287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:41:38.678602   18287 cni.go:84] Creating CNI manager for ""
	I0610 04:41:38.678610   18287 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:41:38.678614   18287 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:41:38.678649   18287 start.go:340] cluster config:
	{Name:no-preload-335000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-335000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:41:38.683229   18287 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.690452   18287 out.go:177] * Starting "no-preload-335000" primary control-plane node in "no-preload-335000" cluster
	I0610 04:41:38.694439   18287 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:41:38.694527   18287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/no-preload-335000/config.json ...
	I0610 04:41:38.694542   18287 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/no-preload-335000/config.json: {Name:mkd55d6b998233800ed674f13e4aafbdb63c51f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:41:38.694564   18287 cache.go:107] acquiring lock: {Name:mk2c43a349319889823e75fa1fc400c571cc7a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.694600   18287 cache.go:107] acquiring lock: {Name:mkeadb104076275619926d51b992eb707ab727fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.694571   18287 cache.go:107] acquiring lock: {Name:mk25fef9fe9ad437baada8e253f6d7ce04ca07a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.694611   18287 cache.go:107] acquiring lock: {Name:mke5e9f62d70bf6a898e6a05af258116e49c5e3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.694618   18287 cache.go:107] acquiring lock: {Name:mk1aef0b48c2c89dc5534b2f73266334684a7ffc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.694622   18287 cache.go:107] acquiring lock: {Name:mkd7bfd7d78d36f5788e1aa0f980e998fd3a0fd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.694569   18287 cache.go:107] acquiring lock: {Name:mk7aa0b169a99514757e58f38a147b975f1eb940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.694737   18287 cache.go:107] acquiring lock: {Name:mk31d45ea3dd7d8a4a409fbb6b4d6761726be93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:38.694809   18287 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 04:41:38.694848   18287 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 285.792µs
	I0610 04:41:38.694889   18287 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0610 04:41:38.694903   18287 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 04:41:38.694939   18287 start.go:360] acquireMachinesLock for no-preload-335000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:38.694939   18287 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0610 04:41:38.694954   18287 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0610 04:41:38.694947   18287 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 04:41:38.694980   18287 start.go:364] duration metric: took 34.459µs to acquireMachinesLock for "no-preload-335000"
	I0610 04:41:38.694984   18287 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0610 04:41:38.695027   18287 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0610 04:41:38.694996   18287 start.go:93] Provisioning new machine with config: &{Name:no-preload-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:no-preload-335000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:41:38.695044   18287 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:41:38.699450   18287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:41:38.695162   18287 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0610 04:41:38.708959   18287 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0610 04:41:38.709442   18287 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 04:41:38.709702   18287 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0610 04:41:38.709719   18287 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0610 04:41:38.709713   18287 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0610 04:41:38.709804   18287 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0610 04:41:38.711483   18287 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0610 04:41:38.717389   18287 start.go:159] libmachine.API.Create for "no-preload-335000" (driver="qemu2")
	I0610 04:41:38.717410   18287 client.go:168] LocalClient.Create starting
	I0610 04:41:38.717475   18287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:41:38.717506   18287 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:38.717517   18287 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:38.717569   18287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:41:38.717593   18287 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:38.717609   18287 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:38.718030   18287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:41:38.867943   18287 main.go:141] libmachine: Creating SSH key...
	I0610 04:41:38.953142   18287 main.go:141] libmachine: Creating Disk image...
	I0610 04:41:38.953173   18287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:41:38.953421   18287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2
	I0610 04:41:38.966697   18287 main.go:141] libmachine: STDOUT: 
	I0610 04:41:38.966715   18287 main.go:141] libmachine: STDERR: 
	I0610 04:41:38.966782   18287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2 +20000M
	I0610 04:41:38.979089   18287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:41:38.979109   18287 main.go:141] libmachine: STDERR: 
	I0610 04:41:38.979125   18287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2
	I0610 04:41:38.979131   18287 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:41:38.979171   18287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:94:c6:2e:d6:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2
	I0610 04:41:38.981322   18287 main.go:141] libmachine: STDOUT: 
	I0610 04:41:38.981341   18287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:38.981360   18287 client.go:171] duration metric: took 263.974708ms to LocalClient.Create
	I0610 04:41:39.564171   18287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0610 04:41:39.592057   18287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0610 04:41:39.604769   18287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0610 04:41:39.632056   18287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0610 04:41:39.721394   18287 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0610 04:41:39.721470   18287 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.026991084s
	I0610 04:41:39.721500   18287 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0610 04:41:39.727498   18287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1
	I0610 04:41:39.757487   18287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1
	I0610 04:41:39.762397   18287 cache.go:162] opening:  /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1
	I0610 04:41:40.981458   18287 start.go:128] duration metric: took 2.2866175s to createHost
	I0610 04:41:40.981516   18287 start.go:83] releasing machines lock for "no-preload-335000", held for 2.286759292s
	W0610 04:41:40.981581   18287 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:40.995931   18287 out.go:177] * Deleting "no-preload-335000" in qemu2 ...
	W0610 04:41:41.024710   18287 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:41.024743   18287 start.go:728] Will try again in 5 seconds ...
	I0610 04:41:42.041378   18287 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0610 04:41:42.041429   18287 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.347141166s
	I0610 04:41:42.041455   18287 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0610 04:41:42.910345   18287 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0610 04:41:42.910425   18287 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 4.216201125s
	I0610 04:41:42.910454   18287 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0610 04:41:43.165282   18287 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0610 04:41:43.165349   18287 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 4.471150458s
	I0610 04:41:43.165402   18287 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0610 04:41:43.540942   18287 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0610 04:41:43.540988   18287 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 4.846881792s
	I0610 04:41:43.541013   18287 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0610 04:41:44.207263   18287 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0610 04:41:44.207308   18287 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 5.513255916s
	I0610 04:41:44.207331   18287 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0610 04:41:46.025588   18287 start.go:360] acquireMachinesLock for no-preload-335000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:46.025980   18287 start.go:364] duration metric: took 314.5µs to acquireMachinesLock for "no-preload-335000"
	I0610 04:41:46.026109   18287 start.go:93] Provisioning new machine with config: &{Name:no-preload-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-335000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:41:46.026366   18287 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:41:46.037049   18287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:41:46.087258   18287 start.go:159] libmachine.API.Create for "no-preload-335000" (driver="qemu2")
	I0610 04:41:46.087345   18287 client.go:168] LocalClient.Create starting
	I0610 04:41:46.087460   18287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:41:46.087528   18287 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:46.087548   18287 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:46.087608   18287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:41:46.087651   18287 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:46.087665   18287 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:46.088181   18287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:41:46.243113   18287 main.go:141] libmachine: Creating SSH key...
	I0610 04:41:46.328447   18287 main.go:141] libmachine: Creating Disk image...
	I0610 04:41:46.328458   18287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:41:46.328629   18287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2
	I0610 04:41:46.341383   18287 main.go:141] libmachine: STDOUT: 
	I0610 04:41:46.341403   18287 main.go:141] libmachine: STDERR: 
	I0610 04:41:46.341457   18287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2 +20000M
	I0610 04:41:46.352703   18287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:41:46.352716   18287 main.go:141] libmachine: STDERR: 
	I0610 04:41:46.352726   18287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2
	I0610 04:41:46.352732   18287 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:41:46.352775   18287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:7e:c6:96:f2:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2
	I0610 04:41:46.354545   18287 main.go:141] libmachine: STDOUT: 
	I0610 04:41:46.354559   18287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:46.354571   18287 client.go:171] duration metric: took 267.237792ms to LocalClient.Create
	I0610 04:41:46.846580   18287 cache.go:157] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0610 04:41:46.846695   18287 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 8.152787167s
	I0610 04:41:46.846739   18287 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0610 04:41:46.846791   18287 cache.go:87] Successfully saved all images to host disk.
	I0610 04:41:48.356647   18287 start.go:128] duration metric: took 2.330405917s to createHost
	I0610 04:41:48.356732   18287 start.go:83] releasing machines lock for "no-preload-335000", held for 2.330841792s
	W0610 04:41:48.357166   18287 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-335000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-335000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:48.371549   18287 out.go:177] 
	W0610 04:41:48.375849   18287 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:41:48.375900   18287 out.go:239] * 
	* 
	W0610 04:41:48.378446   18287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:41:48.390795   18287 out.go:177] 

** /stderr **
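
Before the launch fails, the driver does finish building the VM disk: qemu-img first converts the raw bootstrap image to qcow2, then grows it by the requested 20000MB. The same two steps in isolation (a sketch; paths shortened, qemu-img flags exactly as logged above):

    # Convert the raw scratch disk into the qcow2 image the VM boots from...
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    # ...then extend it by the profile's requested disk size.
    qemu-img resize disk.qcow2 +20000M
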
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-335000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (67.727292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.90s)
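
Every failure in this group traces back to one step: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal host-side triage sketch, assuming a Homebrew-managed socket_vmnet as in the paths above (the brew service name is an assumption):

    # Does the socket the driver dials exist, and is anything serving it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If the daemon is down, restarting it (Homebrew typically runs it as a
    # root service) should clear the "Connection refused" in every start below.
    sudo brew services restart socket_vmnet
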

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-335000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-335000 create -f testdata/busybox.yaml: exit status 1 (29.430792ms)

** stderr ** 
	error: context "no-preload-335000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-335000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (30.582167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (30.1595ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
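
DeployApp is a downstream casualty: FirstStart never created the cluster, so minikube never wrote a no-preload-335000 context into the kubeconfig, and every kubectl --context invocation exits 1. Confirming the missing context needs only stock kubectl:

    # The context the test targets will be absent from this list.
    kubectl config get-contexts -o name
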

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-335000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-335000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-335000 describe deploy/metrics-server -n kube-system: exit status 1 (26.794375ms)

** stderr ** 
	error: context "no-preload-335000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-335000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (30.609458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
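
Note the asymmetry here: addons enable succeeded, because it only updates the profile's stored config, while the kubectl verification failed for the usual missing-context reason. On a running cluster the check amounts to reading the deployment's image, roughly (a sketch; the jsonpath expression is illustrative):

    # The test expects this to contain fake.domain/registry.k8s.io/echoserver:1.4.
    kubectl --context no-preload-335000 -n kube-system get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
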

TestStartStop/group/no-preload/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-335000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-335000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.177964s)

-- stdout --
	* [no-preload-335000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-335000" primary control-plane node in "no-preload-335000" cluster
	* Restarting existing qemu2 VM for "no-preload-335000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-335000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:41:51.953245   18363 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:41:51.953382   18363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:51.953386   18363 out.go:304] Setting ErrFile to fd 2...
	I0610 04:41:51.953391   18363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:51.953525   18363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:41:51.954482   18363 out.go:298] Setting JSON to false
	I0610 04:41:51.970715   18363 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9682,"bootTime":1718010029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:41:51.970773   18363 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:41:51.974372   18363 out.go:177] * [no-preload-335000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:41:51.981369   18363 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:41:51.981422   18363 notify.go:220] Checking for updates...
	I0610 04:41:51.988268   18363 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:41:51.991504   18363 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:41:51.994387   18363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:41:51.997318   18363 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:41:52.000307   18363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:41:52.003581   18363 config.go:182] Loaded profile config "no-preload-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:52.003850   18363 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:41:52.008352   18363 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:41:52.014305   18363 start.go:297] selected driver: qemu2
	I0610 04:41:52.014311   18363 start.go:901] validating driver "qemu2" against &{Name:no-preload-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-335000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:41:52.014374   18363 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:41:52.016527   18363 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:41:52.016563   18363 cni.go:84] Creating CNI manager for ""
	I0610 04:41:52.016570   18363 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:41:52.016598   18363 start.go:340] cluster config:
	{Name:no-preload-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-335000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:41:52.020926   18363 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.028334   18363 out.go:177] * Starting "no-preload-335000" primary control-plane node in "no-preload-335000" cluster
	I0610 04:41:52.032320   18363 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:41:52.032413   18363 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/no-preload-335000/config.json ...
	I0610 04:41:52.032435   18363 cache.go:107] acquiring lock: {Name:mk2c43a349319889823e75fa1fc400c571cc7a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.032444   18363 cache.go:107] acquiring lock: {Name:mk1aef0b48c2c89dc5534b2f73266334684a7ffc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.032470   18363 cache.go:107] acquiring lock: {Name:mk25fef9fe9ad437baada8e253f6d7ce04ca07a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.032522   18363 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 04:41:52.032528   18363 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0610 04:41:52.032528   18363 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.667µs
	I0610 04:41:52.032536   18363 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 04:41:52.032535   18363 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 101.958µs
	I0610 04:41:52.032547   18363 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0610 04:41:52.032549   18363 cache.go:107] acquiring lock: {Name:mk31d45ea3dd7d8a4a409fbb6b4d6761726be93a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.032560   18363 cache.go:107] acquiring lock: {Name:mkeadb104076275619926d51b992eb707ab727fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.032591   18363 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0610 04:41:52.032596   18363 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 47.542µs
	I0610 04:41:52.032600   18363 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0610 04:41:52.032605   18363 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0610 04:41:52.032607   18363 cache.go:107] acquiring lock: {Name:mkd7bfd7d78d36f5788e1aa0f980e998fd3a0fd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.032603   18363 cache.go:107] acquiring lock: {Name:mk7aa0b169a99514757e58f38a147b975f1eb940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.032610   18363 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 50.833µs
	I0610 04:41:52.032633   18363 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0610 04:41:52.032583   18363 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0610 04:41:52.032647   18363 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 218.5µs
	I0610 04:41:52.032654   18363 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0610 04:41:52.032658   18363 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0610 04:41:52.032656   18363 cache.go:107] acquiring lock: {Name:mke5e9f62d70bf6a898e6a05af258116e49c5e3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:52.032662   18363 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 55.667µs
	I0610 04:41:52.032667   18363 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0610 04:41:52.032670   18363 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0610 04:41:52.032676   18363 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 93.666µs
	I0610 04:41:52.032684   18363 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0610 04:41:52.032722   18363 cache.go:115] /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0610 04:41:52.032728   18363 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 89.209µs
	I0610 04:41:52.032735   18363 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0610 04:41:52.032740   18363 cache.go:87] Successfully saved all images to host disk.
	I0610 04:41:52.032882   18363 start.go:360] acquireMachinesLock for no-preload-335000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:52.032914   18363 start.go:364] duration metric: took 24.583µs to acquireMachinesLock for "no-preload-335000"
	I0610 04:41:52.032923   18363 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:41:52.032928   18363 fix.go:54] fixHost starting: 
	I0610 04:41:52.033056   18363 fix.go:112] recreateIfNeeded on no-preload-335000: state=Stopped err=<nil>
	W0610 04:41:52.033069   18363 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:41:52.041352   18363 out.go:177] * Restarting existing qemu2 VM for "no-preload-335000" ...
	I0610 04:41:52.045384   18363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:7e:c6:96:f2:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2
	I0610 04:41:52.047512   18363 main.go:141] libmachine: STDOUT: 
	I0610 04:41:52.047530   18363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:52.047559   18363 fix.go:56] duration metric: took 14.6305ms for fixHost
	I0610 04:41:52.047563   18363 start.go:83] releasing machines lock for "no-preload-335000", held for 14.646083ms
	W0610 04:41:52.047572   18363 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:41:52.047607   18363 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:52.047612   18363 start.go:728] Will try again in 5 seconds ...
	I0610 04:41:57.048522   18363 start.go:360] acquireMachinesLock for no-preload-335000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:57.048821   18363 start.go:364] duration metric: took 220.459µs to acquireMachinesLock for "no-preload-335000"
	I0610 04:41:57.048903   18363 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:41:57.048920   18363 fix.go:54] fixHost starting: 
	I0610 04:41:57.049407   18363 fix.go:112] recreateIfNeeded on no-preload-335000: state=Stopped err=<nil>
	W0610 04:41:57.049426   18363 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:41:57.056876   18363 out.go:177] * Restarting existing qemu2 VM for "no-preload-335000" ...
	I0610 04:41:57.060059   18363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:7e:c6:96:f2:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/no-preload-335000/disk.qcow2
	I0610 04:41:57.066650   18363 main.go:141] libmachine: STDOUT: 
	I0610 04:41:57.066722   18363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:57.066778   18363 fix.go:56] duration metric: took 17.86425ms for fixHost
	I0610 04:41:57.066793   18363 start.go:83] releasing machines lock for "no-preload-335000", held for 17.959667ms
	W0610 04:41:57.066925   18363 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-335000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-335000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:41:57.075860   18363 out.go:177] 
	W0610 04:41:57.078937   18363 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:41:57.078958   18363 out.go:239] * 
	* 
	W0610 04:41:57.080185   18363 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:41:57.093859   18363 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-335000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (47.40275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.23s)
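
SecondStart takes the fixHost path rather than provisioning: the profile already exists, so minikube tries twice, five seconds apart, to boot the existing disk image through the same unreachable socket. Once socket_vmnet is healthy, the remediation the error text itself suggests is to recreate the profile (flags repeated verbatim from the test invocation):

    # Drop the half-created profile, then repeat the exact start the test runs.
    out/minikube-darwin-arm64 delete -p no-preload-335000
    out/minikube-darwin-arm64 start -p no-preload-335000 --memory=2200 \
      --alsologtostderr --wait=true --preload=false --driver=qemu2 \
      --kubernetes-version=v1.30.1
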

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-335000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (31.807833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-335000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-335000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-335000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.411166ms)

** stderr ** 
	error: context "no-preload-335000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-335000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (32.126ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-335000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (30.926334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
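
The want/got diff above is go-cmp output: all eight expected v1.30.1 images sit on the "-" (want) side and nothing came back, because image list against a stopped profile returns an empty set. The probe can be replayed by hand with the same binary and flags the test uses:

    # With the host stopped this prints nothing, matching the empty "got" side.
    out/minikube-darwin-arm64 -p no-preload-335000 image list --format=json
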

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-335000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-335000 --alsologtostderr -v=1: exit status 83 (41.635083ms)

-- stdout --
	* The control-plane node no-preload-335000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-335000"

-- /stdout --
** stderr ** 
	I0610 04:41:57.345050   18384 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:41:57.345212   18384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:57.345217   18384 out.go:304] Setting ErrFile to fd 2...
	I0610 04:41:57.345219   18384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:57.345349   18384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:41:57.345567   18384 out.go:298] Setting JSON to false
	I0610 04:41:57.345575   18384 mustload.go:65] Loading cluster: no-preload-335000
	I0610 04:41:57.345752   18384 config.go:182] Loaded profile config "no-preload-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:57.349518   18384 out.go:177] * The control-plane node no-preload-335000 host is not running: state=Stopped
	I0610 04:41:57.352492   18384 out.go:177]   To start a cluster, run: "minikube start -p no-preload-335000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-335000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (31.981ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (30.577542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
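
pause exits 83 rather than 80 here because minikube detects the stopped host up front and prints advice instead of attempting the operation; the post-mortem status call then reports the same state with its own advisory exit code, reproducible as:

    # Prints "Stopped" and exits 7, which helpers_test.go treats as "may be ok".
    out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000; echo "exit: $?"
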

TestStartStop/group/embed-certs/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.818208875s)

-- stdout --
	* [embed-certs-601000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-601000" primary control-plane node in "embed-certs-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:41:57.814237   18407 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:41:57.814556   18407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:57.814561   18407 out.go:304] Setting ErrFile to fd 2...
	I0610 04:41:57.814564   18407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:41:57.814769   18407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:41:57.816223   18407 out.go:298] Setting JSON to false
	I0610 04:41:57.832906   18407 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9688,"bootTime":1718010029,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:41:57.832976   18407 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:41:57.837655   18407 out.go:177] * [embed-certs-601000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:41:57.846579   18407 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:41:57.846635   18407 notify.go:220] Checking for updates...
	I0610 04:41:57.851079   18407 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:41:57.854574   18407 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:41:57.857583   18407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:41:57.860574   18407 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:41:57.863506   18407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:41:57.866913   18407 config.go:182] Loaded profile config "cert-expiration-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:57.866977   18407 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:41:57.867036   18407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:41:57.871567   18407 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:41:57.878602   18407 start.go:297] selected driver: qemu2
	I0610 04:41:57.878609   18407 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:41:57.878615   18407 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:41:57.880892   18407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:41:57.884609   18407 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:41:57.887650   18407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:41:57.887691   18407 cni.go:84] Creating CNI manager for ""
	I0610 04:41:57.887699   18407 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:41:57.887703   18407 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:41:57.887736   18407 start.go:340] cluster config:
	{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:41:57.892447   18407 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:41:57.899540   18407 out.go:177] * Starting "embed-certs-601000" primary control-plane node in "embed-certs-601000" cluster
	I0610 04:41:57.903526   18407 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:41:57.903543   18407 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:41:57.903556   18407 cache.go:56] Caching tarball of preloaded images
	I0610 04:41:57.903619   18407 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:41:57.903625   18407 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:41:57.903699   18407 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/embed-certs-601000/config.json ...
	I0610 04:41:57.903714   18407 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/embed-certs-601000/config.json: {Name:mka871f3da47b51563b16346b2f14042f530a209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:41:57.903962   18407 start.go:360] acquireMachinesLock for embed-certs-601000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:41:57.904000   18407 start.go:364] duration metric: took 29.291µs to acquireMachinesLock for "embed-certs-601000"
	I0610 04:41:57.904013   18407 start.go:93] Provisioning new machine with config: &{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:41:57.904043   18407 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:41:57.911567   18407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:41:57.930113   18407 start.go:159] libmachine.API.Create for "embed-certs-601000" (driver="qemu2")
	I0610 04:41:57.930142   18407 client.go:168] LocalClient.Create starting
	I0610 04:41:57.930209   18407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:41:57.930240   18407 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:57.930256   18407 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:57.930306   18407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:41:57.930330   18407 main.go:141] libmachine: Decoding PEM data...
	I0610 04:41:57.930342   18407 main.go:141] libmachine: Parsing certificate...
	I0610 04:41:57.930700   18407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:41:58.075887   18407 main.go:141] libmachine: Creating SSH key...
	I0610 04:41:58.156886   18407 main.go:141] libmachine: Creating Disk image...
	I0610 04:41:58.156893   18407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:41:58.157070   18407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2
	I0610 04:41:58.169809   18407 main.go:141] libmachine: STDOUT: 
	I0610 04:41:58.169831   18407 main.go:141] libmachine: STDERR: 
	I0610 04:41:58.169915   18407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2 +20000M
	I0610 04:41:58.181382   18407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:41:58.181399   18407 main.go:141] libmachine: STDERR: 
	I0610 04:41:58.181408   18407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2
	I0610 04:41:58.181414   18407 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:41:58.181475   18407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:be:4f:70:70:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2
	I0610 04:41:58.183190   18407 main.go:141] libmachine: STDOUT: 
	I0610 04:41:58.183204   18407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:41:58.183223   18407 client.go:171] duration metric: took 253.082166ms to LocalClient.Create
	I0610 04:42:00.185360   18407 start.go:128] duration metric: took 2.28136525s to createHost
	I0610 04:42:00.185438   18407 start.go:83] releasing machines lock for "embed-certs-601000", held for 2.281500458s
	W0610 04:42:00.185492   18407 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:00.196768   18407 out.go:177] * Deleting "embed-certs-601000" in qemu2 ...
	W0610 04:42:00.227003   18407 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:00.227035   18407 start.go:728] Will try again in 5 seconds ...
	I0610 04:42:05.229171   18407 start.go:360] acquireMachinesLock for embed-certs-601000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:05.229716   18407 start.go:364] duration metric: took 395.25µs to acquireMachinesLock for "embed-certs-601000"
	I0610 04:42:05.229839   18407 start.go:93] Provisioning new machine with config: &{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:42:05.230118   18407 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:42:05.239737   18407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:42:05.290441   18407 start.go:159] libmachine.API.Create for "embed-certs-601000" (driver="qemu2")
	I0610 04:42:05.290487   18407 client.go:168] LocalClient.Create starting
	I0610 04:42:05.290598   18407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:42:05.290677   18407 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:05.290694   18407 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:05.290759   18407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:42:05.290807   18407 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:05.290825   18407 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:05.291340   18407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:42:05.447924   18407 main.go:141] libmachine: Creating SSH key...
	I0610 04:42:05.535106   18407 main.go:141] libmachine: Creating Disk image...
	I0610 04:42:05.535111   18407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:42:05.535286   18407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2
	I0610 04:42:05.547997   18407 main.go:141] libmachine: STDOUT: 
	I0610 04:42:05.548013   18407 main.go:141] libmachine: STDERR: 
	I0610 04:42:05.548092   18407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2 +20000M
	I0610 04:42:05.559002   18407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:42:05.559015   18407 main.go:141] libmachine: STDERR: 
	I0610 04:42:05.559040   18407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2
	I0610 04:42:05.559046   18407 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:42:05.559088   18407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5e:76:35:a1:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2
	I0610 04:42:05.560906   18407 main.go:141] libmachine: STDOUT: 
	I0610 04:42:05.560920   18407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:05.560933   18407 client.go:171] duration metric: took 270.444584ms to LocalClient.Create
	I0610 04:42:07.563214   18407 start.go:128] duration metric: took 2.333109s to createHost
	I0610 04:42:07.563300   18407 start.go:83] releasing machines lock for "embed-certs-601000", held for 2.333607958s
	W0610 04:42:07.563622   18407 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:07.576333   18407 out.go:177] 
	W0610 04:42:07.580460   18407 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:07.580486   18407 out.go:239] * 
	* 
	W0610 04:42:07.583282   18407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:42:07.591335   18407 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (68.416625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.89s)
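
Every attempt in the FirstStart log above dies at the same step: qemu-system-aarch64 is launched through socket_vmnet_client, but nothing is listening on /var/run/socket_vmnet, so the VM never boots and createHost gives up after a single retry. A minimal triage sketch for the CI host, assuming the Homebrew-installed socket_vmnet implied by the paths in the log (the service-start line follows the minikube qemu2 driver docs and is not verified against this machine):

    # Is the daemon running, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If not, start it as root so it can create the socket
    # (per the minikube qemu2 driver docs for a Homebrew install):
    HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet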

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-601000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-601000 create -f testdata/busybox.yaml: exit status 1 (29.771125ms)

** stderr **
	error: context "embed-certs-601000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-601000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (29.43625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (30.079458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
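
The DeployApp failure is purely downstream of FirstStart: the cluster was never created, so kubeconfig has no embed-certs-601000 context and every kubectl call against it exits 1. A quick confirmation, using only standard kubectl and the profile name from the log:

    kubectl config get-contexts                      # embed-certs-601000 will be absent
    kubectl --context embed-certs-601000 get nodes   # reproduces: context "embed-certs-601000" does not exist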

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-601000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-601000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-601000 describe deploy/metrics-server -n kube-system: exit status 1 (27.092833ms)

** stderr **
	error: context "embed-certs-601000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-601000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (29.860833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
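
Note that "addons enable" itself exits 0 here even with the host down: it only records the addon in the profile, which the SecondStart log below confirms when the reloaded config carries Addons:map[dashboard:true metrics-server:true]. Only the follow-up kubectl describe needs a live apiserver. A hypothetical spot-check of the recorded profile (the JSON key is inferred from the config dumps in this log, not verified):

    grep -o '"metrics-server": *true' \
      /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/embed-certs-601000/config.json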

TestStartStop/group/embed-certs/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.178788542s)

-- stdout --
	* [embed-certs-601000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-601000" primary control-plane node in "embed-certs-601000" cluster
	* Restarting existing qemu2 VM for "embed-certs-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:42:10.049357   18460 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:42:10.049488   18460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:10.049490   18460 out.go:304] Setting ErrFile to fd 2...
	I0610 04:42:10.049496   18460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:10.049645   18460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:42:10.050649   18460 out.go:298] Setting JSON to false
	I0610 04:42:10.066804   18460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9701,"bootTime":1718010029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:42:10.066875   18460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:42:10.071705   18460 out.go:177] * [embed-certs-601000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:42:10.078699   18460 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:42:10.082498   18460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:42:10.078738   18460 notify.go:220] Checking for updates...
	I0610 04:42:10.089600   18460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:42:10.091000   18460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:42:10.094600   18460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:42:10.097663   18460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:42:10.100992   18460 config.go:182] Loaded profile config "embed-certs-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:10.101252   18460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:42:10.104631   18460 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:42:10.111616   18460 start.go:297] selected driver: qemu2
	I0610 04:42:10.111622   18460 start.go:901] validating driver "qemu2" against &{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:42:10.111702   18460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:42:10.114009   18460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:42:10.114053   18460 cni.go:84] Creating CNI manager for ""
	I0610 04:42:10.114061   18460 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:42:10.114092   18460 start.go:340] cluster config:
	{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:42:10.118574   18460 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:42:10.126635   18460 out.go:177] * Starting "embed-certs-601000" primary control-plane node in "embed-certs-601000" cluster
	I0610 04:42:10.130606   18460 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:42:10.130620   18460 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:42:10.130629   18460 cache.go:56] Caching tarball of preloaded images
	I0610 04:42:10.130693   18460 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:42:10.130699   18460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:42:10.130756   18460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/embed-certs-601000/config.json ...
	I0610 04:42:10.131270   18460 start.go:360] acquireMachinesLock for embed-certs-601000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:10.131298   18460 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "embed-certs-601000"
	I0610 04:42:10.131306   18460 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:42:10.131313   18460 fix.go:54] fixHost starting: 
	I0610 04:42:10.131424   18460 fix.go:112] recreateIfNeeded on embed-certs-601000: state=Stopped err=<nil>
	W0610 04:42:10.131437   18460 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:42:10.135666   18460 out.go:177] * Restarting existing qemu2 VM for "embed-certs-601000" ...
	I0610 04:42:10.143656   18460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5e:76:35:a1:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2
	I0610 04:42:10.145639   18460 main.go:141] libmachine: STDOUT: 
	I0610 04:42:10.145659   18460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:10.145687   18460 fix.go:56] duration metric: took 14.372958ms for fixHost
	I0610 04:42:10.145693   18460 start.go:83] releasing machines lock for "embed-certs-601000", held for 14.390666ms
	W0610 04:42:10.145699   18460 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:10.145729   18460 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:10.145734   18460 start.go:728] Will try again in 5 seconds ...
	I0610 04:42:15.147702   18460 start.go:360] acquireMachinesLock for embed-certs-601000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:15.147765   18460 start.go:364] duration metric: took 50.375µs to acquireMachinesLock for "embed-certs-601000"
	I0610 04:42:15.147776   18460 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:42:15.147779   18460 fix.go:54] fixHost starting: 
	I0610 04:42:15.147907   18460 fix.go:112] recreateIfNeeded on embed-certs-601000: state=Stopped err=<nil>
	W0610 04:42:15.147912   18460 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:42:15.162197   18460 out.go:177] * Restarting existing qemu2 VM for "embed-certs-601000" ...
	I0610 04:42:15.165179   18460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5e:76:35:a1:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/embed-certs-601000/disk.qcow2
	I0610 04:42:15.167135   18460 main.go:141] libmachine: STDOUT: 
	I0610 04:42:15.167152   18460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:15.167169   18460 fix.go:56] duration metric: took 19.390459ms for fixHost
	I0610 04:42:15.167175   18460 start.go:83] releasing machines lock for "embed-certs-601000", held for 19.406292ms
	W0610 04:42:15.167202   18460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:15.171198   18460 out.go:177] 
	W0610 04:42:15.178186   18460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:15.178192   18460 out.go:239] * 
	* 
	W0610 04:42:15.178687   18460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:42:15.193133   18460 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (33.251083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.21s)
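
SecondStart takes the restart path (fixHost, "Restarting existing qemu2 VM") rather than provisioning, but hits the identical socket_vmnet refusal, so the profile stays Stopped. The log's own suggested recovery, spelled out with the binary and flags used throughout this run (it can only succeed once the socket_vmnet daemon is reachable):

    out/minikube-darwin-arm64 delete -p embed-certs-601000
    out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 \
      --embed-certs --driver=qemu2 --kubernetes-version=v1.30.1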

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-211000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-211000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.9675045s)

-- stdout --
	* [default-k8s-diff-port-211000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-211000" primary control-plane node in "default-k8s-diff-port-211000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-211000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:42:15.124121   18486 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:42:15.124269   18486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:15.124272   18486 out.go:304] Setting ErrFile to fd 2...
	I0610 04:42:15.124277   18486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:15.124403   18486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:42:15.125665   18486 out.go:298] Setting JSON to false
	I0610 04:42:15.141997   18486 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9706,"bootTime":1718010029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:42:15.142080   18486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:42:15.147147   18486 out.go:177] * [default-k8s-diff-port-211000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:42:15.162193   18486 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:42:15.155108   18486 notify.go:220] Checking for updates...
	I0610 04:42:15.168106   18486 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:42:15.178164   18486 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:42:15.193138   18486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:42:15.200201   18486 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:42:15.207076   18486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:42:15.211576   18486 config.go:182] Loaded profile config "embed-certs-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:15.211643   18486 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:15.211697   18486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:42:15.215340   18486 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:42:15.222102   18486 start.go:297] selected driver: qemu2
	I0610 04:42:15.222109   18486 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:42:15.222115   18486 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:42:15.224964   18486 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:42:15.229078   18486 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:42:15.233331   18486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:42:15.233377   18486 cni.go:84] Creating CNI manager for ""
	I0610 04:42:15.233385   18486 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:42:15.233390   18486 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:42:15.233430   18486 start.go:340] cluster config:
	{Name:default-k8s-diff-port-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:42:15.237821   18486 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:42:15.245159   18486 out.go:177] * Starting "default-k8s-diff-port-211000" primary control-plane node in "default-k8s-diff-port-211000" cluster
	I0610 04:42:15.249119   18486 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:42:15.249143   18486 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:42:15.249151   18486 cache.go:56] Caching tarball of preloaded images
	I0610 04:42:15.249232   18486 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:42:15.249238   18486 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:42:15.249300   18486 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/default-k8s-diff-port-211000/config.json ...
	I0610 04:42:15.249310   18486 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/default-k8s-diff-port-211000/config.json: {Name:mk9f9a1a8fddb69dc0abca44f73464df141fdfe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:42:15.249638   18486 start.go:360] acquireMachinesLock for default-k8s-diff-port-211000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:15.249668   18486 start.go:364] duration metric: took 20.875µs to acquireMachinesLock for "default-k8s-diff-port-211000"
	I0610 04:42:15.249678   18486 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:42:15.249725   18486 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:42:15.253343   18486 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:42:15.269348   18486 start.go:159] libmachine.API.Create for "default-k8s-diff-port-211000" (driver="qemu2")
	I0610 04:42:15.269374   18486 client.go:168] LocalClient.Create starting
	I0610 04:42:15.269444   18486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:42:15.269478   18486 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:15.269491   18486 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:15.269545   18486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:42:15.269568   18486 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:15.269574   18486 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:15.269979   18486 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:42:15.429775   18486 main.go:141] libmachine: Creating SSH key...
	I0610 04:42:15.563048   18486 main.go:141] libmachine: Creating Disk image...
	I0610 04:42:15.563058   18486 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:42:15.563273   18486 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2
	I0610 04:42:15.577121   18486 main.go:141] libmachine: STDOUT: 
	I0610 04:42:15.577148   18486 main.go:141] libmachine: STDERR: 
	I0610 04:42:15.577211   18486 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2 +20000M
	I0610 04:42:15.591937   18486 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:42:15.591954   18486 main.go:141] libmachine: STDERR: 
	I0610 04:42:15.591968   18486 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2
	I0610 04:42:15.591972   18486 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:42:15.591997   18486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b9:21:35:8d:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2
	I0610 04:42:15.593769   18486 main.go:141] libmachine: STDOUT: 
	I0610 04:42:15.593785   18486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:15.593810   18486 client.go:171] duration metric: took 324.435417ms to LocalClient.Create
	I0610 04:42:17.596003   18486 start.go:128] duration metric: took 2.346281375s to createHost
	I0610 04:42:17.596100   18486 start.go:83] releasing machines lock for "default-k8s-diff-port-211000", held for 2.34645075s
	W0610 04:42:17.596204   18486 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:17.613226   18486 out.go:177] * Deleting "default-k8s-diff-port-211000" in qemu2 ...
	W0610 04:42:17.637025   18486 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:17.637048   18486 start.go:728] Will try again in 5 seconds ...
	I0610 04:42:22.639246   18486 start.go:360] acquireMachinesLock for default-k8s-diff-port-211000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:22.639822   18486 start.go:364] duration metric: took 431.583µs to acquireMachinesLock for "default-k8s-diff-port-211000"
	I0610 04:42:22.639976   18486 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:42:22.640265   18486 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:42:22.644911   18486 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:42:22.692769   18486 start.go:159] libmachine.API.Create for "default-k8s-diff-port-211000" (driver="qemu2")
	I0610 04:42:22.692816   18486 client.go:168] LocalClient.Create starting
	I0610 04:42:22.692917   18486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:42:22.692984   18486 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:22.693001   18486 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:22.693059   18486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:42:22.693102   18486 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:22.693115   18486 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:22.693670   18486 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:42:22.849987   18486 main.go:141] libmachine: Creating SSH key...
	I0610 04:42:22.994554   18486 main.go:141] libmachine: Creating Disk image...
	I0610 04:42:22.994561   18486 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:42:22.994750   18486 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2
	I0610 04:42:23.007600   18486 main.go:141] libmachine: STDOUT: 
	I0610 04:42:23.007621   18486 main.go:141] libmachine: STDERR: 
	I0610 04:42:23.007672   18486 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2 +20000M
	I0610 04:42:23.018538   18486 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:42:23.018554   18486 main.go:141] libmachine: STDERR: 
	I0610 04:42:23.018564   18486 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2
	I0610 04:42:23.018577   18486 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:42:23.018602   18486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a8:9d:cd:a2:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2
	I0610 04:42:23.020300   18486 main.go:141] libmachine: STDOUT: 
	I0610 04:42:23.020315   18486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:23.020329   18486 client.go:171] duration metric: took 327.509333ms to LocalClient.Create
	I0610 04:42:25.022487   18486 start.go:128] duration metric: took 2.382219042s to createHost
	I0610 04:42:25.022549   18486 start.go:83] releasing machines lock for "default-k8s-diff-port-211000", held for 2.382720708s
	W0610 04:42:25.022955   18486 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-211000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-211000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:25.031502   18486 out.go:177] 
	W0610 04:42:25.037613   18486 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:25.037638   18486 out.go:239] * 
	* 
	W0610 04:42:25.040161   18486 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:42:25.047414   18486 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-211000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (67.125208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.04s)
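
Every start failure in this run reduces to the single error visible in the stderr block above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU its network file descriptor and the VM is never created. The condition can be reproduced outside minikube with a minimal Go probe (a sketch, not part of the test suite; the socket path is copied from the log):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path copied from the failing log line; adjust if
        // socket_vmnet was launched with a different socket location.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Same condition libmachine hits before QEMU is ever started.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On this agent the dial would fail with "connection refused", which points at the socket_vmnet service on the host rather than at minikube itself.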

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-601000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (27.663292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-601000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.872875ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-601000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (33.999875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-601000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (33.704417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)
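
The (-want +got) block above is a want/got diff of the expected image list against an empty result: every entry carries a leading - because "image list" had no running host to query. The output format matches the github.com/google/go-cmp package; a minimal sketch of how such a diff is produced (assuming go-cmp, which this report does not itself confirm):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.30.1",
            "registry.k8s.io/pause:3.9",
        }
        var got []string // empty: the VM was never provisioned
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.30.1 images missing (-want +got):\n%s", diff)
        }
    }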

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-601000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-601000 --alsologtostderr -v=1: exit status 83 (44.817667ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-601000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-601000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:42:15.445016   18505 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:42:15.445163   18505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:15.445167   18505 out.go:304] Setting ErrFile to fd 2...
	I0610 04:42:15.445169   18505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:15.445288   18505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:42:15.445515   18505 out.go:298] Setting JSON to false
	I0610 04:42:15.445522   18505 mustload.go:65] Loading cluster: embed-certs-601000
	I0610 04:42:15.445726   18505 config.go:182] Loaded profile config "embed-certs-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:15.450231   18505 out.go:177] * The control-plane node embed-certs-601000 host is not running: state=Stopped
	I0610 04:42:15.454208   18505 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-601000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-601000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (28.648125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (29.511333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
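
The harness distinguishes failure modes by exit code alone: "pause" exited 83 here because the control-plane host was stopped, while the post-mortem "status" probe exited 7, which the helpers record as "may be ok" before skipping log retrieval. A standalone sketch of that status check using only os/exec (binary path and profile name copied from the log above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "embed-certs-601000")
        out, err := cmd.Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Exit status 7 with stdout "Stopped" is the state the
            // post-mortem above treats as "may be ok".
            fmt.Printf("state=%q exit=%d\n", string(out), ee.ExitCode())
            return
        }
        if err != nil {
            fmt.Printf("could not run status: %v\n", err)
            return
        }
        fmt.Printf("state=%q (running)\n", string(out))
    }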

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (11.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (11.681318083s)

                                                
                                                
-- stdout --
	* [newest-cni-332000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-332000" primary control-plane node in "newest-cni-332000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-332000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 04:42:15.911894   18531 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:42:15.912221   18531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:15.912227   18531 out.go:304] Setting ErrFile to fd 2...
	I0610 04:42:15.912229   18531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:15.912427   18531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:42:15.913844   18531 out.go:298] Setting JSON to false
	I0610 04:42:15.930402   18531 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9706,"bootTime":1718010029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:42:15.930469   18531 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:42:15.935465   18531 out.go:177] * [newest-cni-332000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:42:15.942425   18531 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:42:15.942479   18531 notify.go:220] Checking for updates...
	I0610 04:42:15.950333   18531 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:42:15.953351   18531 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:42:15.956442   18531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:42:15.959440   18531 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:42:15.962368   18531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:42:15.965825   18531 config.go:182] Loaded profile config "default-k8s-diff-port-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:15.965890   18531 config.go:182] Loaded profile config "multinode-766000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:15.965944   18531 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:42:15.973370   18531 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 04:42:15.980392   18531 start.go:297] selected driver: qemu2
	I0610 04:42:15.980399   18531 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:42:15.980405   18531 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:42:15.982810   18531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0610 04:42:15.982837   18531 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0610 04:42:15.991409   18531 out.go:177] * Automatically selected the socket_vmnet network
	I0610 04:42:15.994497   18531 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 04:42:15.994558   18531 cni.go:84] Creating CNI manager for ""
	I0610 04:42:15.994568   18531 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:42:15.994576   18531 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:42:15.994613   18531 start.go:340] cluster config:
	{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:42:15.999566   18531 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:42:16.008377   18531 out.go:177] * Starting "newest-cni-332000" primary control-plane node in "newest-cni-332000" cluster
	I0610 04:42:16.012195   18531 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:42:16.012217   18531 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:42:16.012225   18531 cache.go:56] Caching tarball of preloaded images
	I0610 04:42:16.012308   18531 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:42:16.012314   18531 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:42:16.012399   18531 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/newest-cni-332000/config.json ...
	I0610 04:42:16.012410   18531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/newest-cni-332000/config.json: {Name:mk0b49cb9af4dd63382938bedf33e4f5a31d4322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:42:16.012823   18531 start.go:360] acquireMachinesLock for newest-cni-332000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:17.596292   18531 start.go:364] duration metric: took 1.583418042s to acquireMachinesLock for "newest-cni-332000"
	I0610 04:42:17.596419   18531 start.go:93] Provisioning new machine with config: &{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:42:17.596688   18531 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:42:17.606267   18531 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:42:17.655835   18531 start.go:159] libmachine.API.Create for "newest-cni-332000" (driver="qemu2")
	I0610 04:42:17.655878   18531 client.go:168] LocalClient.Create starting
	I0610 04:42:17.656022   18531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:42:17.656090   18531 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:17.656112   18531 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:17.656179   18531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:42:17.656223   18531 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:17.656234   18531 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:17.656993   18531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:42:17.812861   18531 main.go:141] libmachine: Creating SSH key...
	I0610 04:42:18.062259   18531 main.go:141] libmachine: Creating Disk image...
	I0610 04:42:18.062268   18531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:42:18.062520   18531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2
	I0610 04:42:18.075973   18531 main.go:141] libmachine: STDOUT: 
	I0610 04:42:18.075991   18531 main.go:141] libmachine: STDERR: 
	I0610 04:42:18.076043   18531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2 +20000M
	I0610 04:42:18.087092   18531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:42:18.087105   18531 main.go:141] libmachine: STDERR: 
	I0610 04:42:18.087118   18531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2
	I0610 04:42:18.087125   18531 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:42:18.087164   18531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:8d:b5:8c:19:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2
	I0610 04:42:18.088811   18531 main.go:141] libmachine: STDOUT: 
	I0610 04:42:18.088824   18531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:18.088847   18531 client.go:171] duration metric: took 432.965041ms to LocalClient.Create
	I0610 04:42:20.091007   18531 start.go:128] duration metric: took 2.494317916s to createHost
	I0610 04:42:20.091108   18531 start.go:83] releasing machines lock for "newest-cni-332000", held for 2.49476825s
	W0610 04:42:20.091154   18531 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:20.110308   18531 out.go:177] * Deleting "newest-cni-332000" in qemu2 ...
	W0610 04:42:20.143156   18531 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:20.143209   18531 start.go:728] Will try again in 5 seconds ...
	I0610 04:42:25.145240   18531 start.go:360] acquireMachinesLock for newest-cni-332000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:25.145321   18531 start.go:364] duration metric: took 63.125µs to acquireMachinesLock for "newest-cni-332000"
	I0610 04:42:25.145347   18531 start.go:93] Provisioning new machine with config: &{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 04:42:25.145399   18531 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 04:42:25.153966   18531 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 04:42:25.169727   18531 start.go:159] libmachine.API.Create for "newest-cni-332000" (driver="qemu2")
	I0610 04:42:25.169760   18531 client.go:168] LocalClient.Create starting
	I0610 04:42:25.169813   18531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/ca.pem
	I0610 04:42:25.169839   18531 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:25.169848   18531 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:25.169887   18531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19052-14289/.minikube/certs/cert.pem
	I0610 04:42:25.169903   18531 main.go:141] libmachine: Decoding PEM data...
	I0610 04:42:25.169907   18531 main.go:141] libmachine: Parsing certificate...
	I0610 04:42:25.170178   18531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 04:42:25.348815   18531 main.go:141] libmachine: Creating SSH key...
	I0610 04:42:25.489396   18531 main.go:141] libmachine: Creating Disk image...
	I0610 04:42:25.489404   18531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 04:42:25.489607   18531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2
	I0610 04:42:25.501874   18531 main.go:141] libmachine: STDOUT: 
	I0610 04:42:25.501895   18531 main.go:141] libmachine: STDERR: 
	I0610 04:42:25.501950   18531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2 +20000M
	I0610 04:42:25.513085   18531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 04:42:25.513099   18531 main.go:141] libmachine: STDERR: 
	I0610 04:42:25.513115   18531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2
	I0610 04:42:25.513118   18531 main.go:141] libmachine: Starting QEMU VM...
	I0610 04:42:25.513155   18531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:08:ef:b0:7d:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2
	I0610 04:42:25.514854   18531 main.go:141] libmachine: STDOUT: 
	I0610 04:42:25.514866   18531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:25.514879   18531 client.go:171] duration metric: took 345.118584ms to LocalClient.Create
	I0610 04:42:27.516994   18531 start.go:128] duration metric: took 2.371595875s to createHost
	I0610 04:42:27.517027   18531 start.go:83] releasing machines lock for "newest-cni-332000", held for 2.371719083s
	W0610 04:42:27.517124   18531 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:27.523362   18531 out.go:177] 
	W0610 04:42:27.534399   18531 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:27.534408   18531 out.go:239] * 
	* 
	W0610 04:42:27.534984   18531 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:42:27.549275   18531 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (30.024084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.71s)
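
This FirstStart log shows the provisioning retry in full: one failed create, deletion of the half-built profile, a fixed five-second pause ("Will try again in 5 seconds ..."), exactly one more attempt, and then the GUEST_PROVISION exit. A compact sketch of that control flow as it appears in the log (hypothetical helper signatures, not minikube's actual API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startWithRetry mirrors the sequence in the log: create, and on failure
    // delete the partial host, wait 5s, and try exactly once more.
    func startWithRetry(create func() error, remove func()) error {
        if err := create(); err == nil {
            return nil
        }
        remove()
        time.Sleep(5 * time.Second)
        if err := create(); err != nil {
            return fmt.Errorf("error provisioning guest: %w", err)
        }
        return nil
    }

    func main() {
        err := startWithRetry(
            func() error {
                return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
            },
            func() { fmt.Println(`* Deleting "newest-cni-332000" in qemu2 ...`) },
        )
        fmt.Println("final:", err)
    }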

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-211000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211000 create -f testdata/busybox.yaml: exit status 1 (30.424166ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-211000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-211000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (33.587042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (33.350666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
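
The kubectl failures in this group are all secondary: the default-k8s-diff-port-211000 context was never written to the kubeconfig because the VM never booted, so every kubectl invocation dies before reaching a cluster. A quick pre-flight for that state (a sketch assuming kubectl is on PATH; "config get-contexts" with an explicit name should exit non-zero when the context is absent):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        ctx := "default-k8s-diff-port-211000"
        // get-contexts with an explicit name fails for an unknown context,
        // which is exactly the state this test group ran into.
        if err := exec.Command("kubectl", "config", "get-contexts", ctx).Run(); err != nil {
            fmt.Printf("context %q does not exist: %v\n", ctx, err)
            return
        }
        fmt.Printf("context %q exists\n", ctx)
    }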

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-211000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-211000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211000 describe deploy/metrics-server -n kube-system: exit status 1 (30.216667ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-211000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-211000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (30.670833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-211000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-211000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.204092666s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-211000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-211000" primary control-plane node in "default-k8s-diff-port-211000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-211000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-211000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:42:27.468711   18577 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:42:27.468826   18577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:27.468829   18577 out.go:304] Setting ErrFile to fd 2...
	I0610 04:42:27.468832   18577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:27.468988   18577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:42:27.469985   18577 out.go:298] Setting JSON to false
	I0610 04:42:27.486534   18577 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9718,"bootTime":1718010029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:42:27.486615   18577 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:42:27.491413   18577 out.go:177] * [default-k8s-diff-port-211000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:42:27.498405   18577 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:42:27.498461   18577 notify.go:220] Checking for updates...
	I0610 04:42:27.504357   18577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:42:27.507381   18577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:42:27.510432   18577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:42:27.511756   18577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:42:27.514436   18577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:42:27.517640   18577 config.go:182] Loaded profile config "default-k8s-diff-port-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:27.517900   18577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:42:27.531374   18577 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:42:27.537450   18577 start.go:297] selected driver: qemu2
	I0610 04:42:27.537457   18577 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:42:27.537524   18577 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:42:27.539925   18577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 04:42:27.539966   18577 cni.go:84] Creating CNI manager for ""
	I0610 04:42:27.539974   18577 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:42:27.539998   18577 start.go:340] cluster config:
	{Name:default-k8s-diff-port-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:42:27.544420   18577 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:42:27.556332   18577 out.go:177] * Starting "default-k8s-diff-port-211000" primary control-plane node in "default-k8s-diff-port-211000" cluster
	I0610 04:42:27.566457   18577 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:42:27.566480   18577 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:42:27.566488   18577 cache.go:56] Caching tarball of preloaded images
	I0610 04:42:27.566547   18577 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:42:27.566553   18577 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:42:27.566614   18577 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/default-k8s-diff-port-211000/config.json ...
	I0610 04:42:27.567060   18577 start.go:360] acquireMachinesLock for default-k8s-diff-port-211000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:27.567092   18577 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "default-k8s-diff-port-211000"
	I0610 04:42:27.567101   18577 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:42:27.567109   18577 fix.go:54] fixHost starting: 
	I0610 04:42:27.567250   18577 fix.go:112] recreateIfNeeded on default-k8s-diff-port-211000: state=Stopped err=<nil>
	W0610 04:42:27.567259   18577 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:42:27.570400   18577 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-211000" ...
	I0610 04:42:27.578380   18577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a8:9d:cd:a2:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2
	I0610 04:42:27.580898   18577 main.go:141] libmachine: STDOUT: 
	I0610 04:42:27.580919   18577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:27.580956   18577 fix.go:56] duration metric: took 13.846292ms for fixHost
	I0610 04:42:27.580963   18577 start.go:83] releasing machines lock for "default-k8s-diff-port-211000", held for 13.865666ms
	W0610 04:42:27.580972   18577 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:27.581012   18577 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:27.581018   18577 start.go:728] Will try again in 5 seconds ...
	I0610 04:42:32.583226   18577 start.go:360] acquireMachinesLock for default-k8s-diff-port-211000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:32.583735   18577 start.go:364] duration metric: took 370.291µs to acquireMachinesLock for "default-k8s-diff-port-211000"
	I0610 04:42:32.583865   18577 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:42:32.583890   18577 fix.go:54] fixHost starting: 
	I0610 04:42:32.584649   18577 fix.go:112] recreateIfNeeded on default-k8s-diff-port-211000: state=Stopped err=<nil>
	W0610 04:42:32.584684   18577 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:42:32.593919   18577 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-211000" ...
	I0610 04:42:32.598247   18577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a8:9d:cd:a2:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/default-k8s-diff-port-211000/disk.qcow2
	I0610 04:42:32.607942   18577 main.go:141] libmachine: STDOUT: 
	I0610 04:42:32.608014   18577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:32.608098   18577 fix.go:56] duration metric: took 24.207083ms for fixHost
	I0610 04:42:32.608119   18577 start.go:83] releasing machines lock for "default-k8s-diff-port-211000", held for 24.359041ms
	W0610 04:42:32.608317   18577 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-211000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-211000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:32.615989   18577 out.go:177] 
	W0610 04:42:32.619176   18577 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:32.619206   18577 out.go:239] * 
	* 
	W0610 04:42:32.622598   18577 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:42:32.630064   18577 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-211000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (68.278334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
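
Every start in this report fails the same way: the qemu2 driver launches the VM through socket_vmnet_client, the connect to the /var/run/socket_vmnet unix socket is refused, minikube retries once after 5 seconds, then exits 80 with GUEST_PROVISION. "Connection refused" on a unix socket means nothing was listening at that path, i.e. the socket_vmnet daemon was down on the CI host. A sketch of the usual check and recovery, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs (the service name is an assumption):

    ls -l /var/run/socket_vmnet              # the socket should exist while the daemon is listening
    sudo brew services restart socket_vmnet  # restart the daemon, then re-run minikube start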

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.183234875s)

-- stdout --
	* [newest-cni-332000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-332000" primary control-plane node in "newest-cni-332000" cluster
	* Restarting existing qemu2 VM for "newest-cni-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 04:42:30.854305   18610 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:42:30.854438   18610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:30.854445   18610 out.go:304] Setting ErrFile to fd 2...
	I0610 04:42:30.854455   18610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:30.854591   18610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:42:30.855555   18610 out.go:298] Setting JSON to false
	I0610 04:42:30.871728   18610 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9721,"bootTime":1718010029,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:42:30.871793   18610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:42:30.876759   18610 out.go:177] * [newest-cni-332000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:42:30.883727   18610 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:42:30.883783   18610 notify.go:220] Checking for updates...
	I0610 04:42:30.887707   18610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:42:30.891728   18610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:42:30.894668   18610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:42:30.897737   18610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:42:30.900705   18610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:42:30.903950   18610 config.go:182] Loaded profile config "newest-cni-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:30.904200   18610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:42:30.908695   18610 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:42:30.915652   18610 start.go:297] selected driver: qemu2
	I0610 04:42:30.915663   18610 start.go:901] validating driver "qemu2" against &{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:42:30.915716   18610 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:42:30.917998   18610 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 04:42:30.918037   18610 cni.go:84] Creating CNI manager for ""
	I0610 04:42:30.918043   18610 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:42:30.918067   18610 start.go:340] cluster config:
	{Name:newest-cni-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:42:30.922511   18610 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:42:30.929591   18610 out.go:177] * Starting "newest-cni-332000" primary control-plane node in "newest-cni-332000" cluster
	I0610 04:42:30.933752   18610 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:42:30.933767   18610 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:42:30.933778   18610 cache.go:56] Caching tarball of preloaded images
	I0610 04:42:30.933832   18610 preload.go:173] Found /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 04:42:30.933840   18610 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:42:30.933913   18610 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/newest-cni-332000/config.json ...
	I0610 04:42:30.934441   18610 start.go:360] acquireMachinesLock for newest-cni-332000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:30.934470   18610 start.go:364] duration metric: took 22.834µs to acquireMachinesLock for "newest-cni-332000"
	I0610 04:42:30.934478   18610 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:42:30.934484   18610 fix.go:54] fixHost starting: 
	I0610 04:42:30.934597   18610 fix.go:112] recreateIfNeeded on newest-cni-332000: state=Stopped err=<nil>
	W0610 04:42:30.934605   18610 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:42:30.938678   18610 out.go:177] * Restarting existing qemu2 VM for "newest-cni-332000" ...
	I0610 04:42:30.946762   18610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:08:ef:b0:7d:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2
	I0610 04:42:30.948746   18610 main.go:141] libmachine: STDOUT: 
	I0610 04:42:30.948766   18610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:30.948796   18610 fix.go:56] duration metric: took 14.311334ms for fixHost
	I0610 04:42:30.948801   18610 start.go:83] releasing machines lock for "newest-cni-332000", held for 14.326583ms
	W0610 04:42:30.948809   18610 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:30.948838   18610 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:30.948843   18610 start.go:728] Will try again in 5 seconds ...
	I0610 04:42:35.950987   18610 start.go:360] acquireMachinesLock for newest-cni-332000: {Name:mk102f8d8e4530f8d8c69dfac7835da10aaa46e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 04:42:35.951492   18610 start.go:364] duration metric: took 398.875µs to acquireMachinesLock for "newest-cni-332000"
	I0610 04:42:35.951632   18610 start.go:96] Skipping create...Using existing machine configuration
	I0610 04:42:35.951655   18610 fix.go:54] fixHost starting: 
	I0610 04:42:35.952368   18610 fix.go:112] recreateIfNeeded on newest-cni-332000: state=Stopped err=<nil>
	W0610 04:42:35.952396   18610 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 04:42:35.957817   18610 out.go:177] * Restarting existing qemu2 VM for "newest-cni-332000" ...
	I0610 04:42:35.964893   18610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:08:ef:b0:7d:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19052-14289/.minikube/machines/newest-cni-332000/disk.qcow2
	I0610 04:42:35.974952   18610 main.go:141] libmachine: STDOUT: 
	I0610 04:42:35.975015   18610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 04:42:35.975104   18610 fix.go:56] duration metric: took 23.4515ms for fixHost
	I0610 04:42:35.975121   18610 start.go:83] releasing machines lock for "newest-cni-332000", held for 23.60325ms
	W0610 04:42:35.975300   18610 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 04:42:35.982739   18610 out.go:177] 
	W0610 04:42:35.985782   18610 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 04:42:35.985834   18610 out.go:239] * 
	* 
	W0610 04:42:35.988244   18610 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:42:35.996748   18610 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-332000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (67.960541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
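
The "executing:" lines above show how the driver wires guest networking: qemu-system-aarch64 is started as a child of socket_vmnet_client, which connects to the daemon's unix socket and hands the connection to qemu as file descriptor 3 (hence -netdev socket,id=net0,fd=3). When that connect is refused, qemu never launches at all, which is why the host stays "Stopped". Reduced to the networking-relevant pieces, the invocation is:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 ... \
      -device virtio-net-pci,netdev=net0,mac=66:08:ef:b0:7d:45 \
      -netdev socket,id=net0,fd=3 ...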

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-211000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (32.368625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-211000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-211000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-211000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.85775ms)

** stderr ** 
	error: context "default-k8s-diff-port-211000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-211000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (29.193417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-211000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (29.156291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
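
The "(-want +got)" block above is a go-cmp-style diff: every expected image carries a leading "-" and nothing appears on the "+" side, meaning "image list" returned an empty set because the host is stopped, not that individual images are missing. The same result can be reproduced with the command the test runs:

    out/minikube-darwin-arm64 -p default-k8s-diff-port-211000 image list --format=json
    # with the host stopped this prints no images, so the entire want-list shows as missing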

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-211000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-211000 --alsologtostderr -v=1: exit status 83 (40.331792ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-211000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-211000"

-- /stdout --
** stderr ** 
	I0610 04:42:32.899988   18629 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:42:32.900361   18629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:32.900366   18629 out.go:304] Setting ErrFile to fd 2...
	I0610 04:42:32.900368   18629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:32.900553   18629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:42:32.900807   18629 out.go:298] Setting JSON to false
	I0610 04:42:32.900819   18629 mustload.go:65] Loading cluster: default-k8s-diff-port-211000
	I0610 04:42:32.901168   18629 config.go:182] Loaded profile config "default-k8s-diff-port-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:32.905651   18629 out.go:177] * The control-plane node default-k8s-diff-port-211000 host is not running: state=Stopped
	I0610 04:42:32.908604   18629 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-211000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-211000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (28.60725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (29.305542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-211000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
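
Note the distinct exit codes across this group: start exits 80 alongside the GUEST_PROVISION message, status exits 7 for a stopped host (which helpers_test treats as "may be ok"), and pause exits 83, which in this log consistently accompanies the "host is not running" advice text. Observable directly:

    out/minikube-darwin-arm64 pause -p default-k8s-diff-port-211000; echo "exit=$?"
    # prints the advice text above, then exit=83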

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-332000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (30.0075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-332000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-332000 --alsologtostderr -v=1: exit status 83 (39.49325ms)

-- stdout --
	* The control-plane node newest-cni-332000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-332000"

-- /stdout --
** stderr ** 
	I0610 04:42:36.180734   18659 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:42:36.180892   18659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:36.180895   18659 out.go:304] Setting ErrFile to fd 2...
	I0610 04:42:36.180897   18659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:42:36.181029   18659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:42:36.181252   18659 out.go:298] Setting JSON to false
	I0610 04:42:36.181258   18659 mustload.go:65] Loading cluster: newest-cni-332000
	I0610 04:42:36.181454   18659 config.go:182] Loaded profile config "newest-cni-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:42:36.184471   18659 out.go:177] * The control-plane node newest-cni-332000 host is not running: state=Stopped
	I0610 04:42:36.187275   18659 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-332000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-332000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (30.035625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (30.613958ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
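
All of the failures in this section reduce to the single socket_vmnet outage, so once the daemon is healthy the subtests are better re-run wholesale than triaged one by one. A sketch of re-running one group locally, assuming the upstream layout where the suite lives under test/integration (the exact flags this CI passes may differ):

    go test ./test/integration -run 'TestStartStop/group/newest-cni/serial' -timeout 30m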


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.1/json-events 10.19
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.08
18 TestDownloadOnly/v1.30.1/DeleteAll 0.23
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 9.82
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 5.81
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.03
55 TestFunctional/serial/CacheCmd/cache/add_local 1.18
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.21
71 TestFunctional/parallel/DryRun 0.23
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.11
93 TestFunctional/parallel/License 0.64
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
107 TestFunctional/parallel/ProfileCmd/profile_list 0.1
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
112 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/ImageCommands/Setup 2.31
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.09
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
238 TestStoppedBinaryUpgrade/Setup 2.07
240 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.06
258 TestNoKubernetes/serial/ProfileList 0.18
259 TestNoKubernetes/serial/Stop 3.58
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
275 TestStartStop/group/old-k8s-version/serial/Stop 3.43
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 3.12
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
297 TestStartStop/group/embed-certs/serial/Stop 2.02
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.95
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/newest-cni/serial/DeployApp 0
314 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.05
315 TestStartStop/group/newest-cni/serial/Stop 3.05
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-586000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-586000: exit status 85 (94.483625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:15 PDT |          |
	|         | -p download-only-586000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 04:15:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 04:15:56.387407   14787 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:15:56.387601   14787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:15:56.387604   14787 out.go:304] Setting ErrFile to fd 2...
	I0610 04:15:56.387606   14787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:15:56.387727   14787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	W0610 04:15:56.387825   14787 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19052-14289/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19052-14289/.minikube/config/config.json: no such file or directory
	I0610 04:15:56.389118   14787 out.go:298] Setting JSON to true
	I0610 04:15:56.407434   14787 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8127,"bootTime":1718010029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:15:56.407515   14787 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:15:56.413097   14787 out.go:97] [download-only-586000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:15:56.416076   14787 out.go:169] MINIKUBE_LOCATION=19052
	W0610 04:15:56.413192   14787 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 04:15:56.413246   14787 notify.go:220] Checking for updates...
	I0610 04:15:56.424042   14787 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:15:56.427061   14787 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:15:56.428635   14787 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:15:56.432104   14787 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	W0610 04:15:56.438056   14787 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 04:15:56.438325   14787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:15:56.442005   14787 out.go:97] Using the qemu2 driver based on user configuration
	I0610 04:15:56.442026   14787 start.go:297] selected driver: qemu2
	I0610 04:15:56.442042   14787 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:15:56.442133   14787 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:15:56.445034   14787 out.go:169] Automatically selected the socket_vmnet network
	I0610 04:15:56.450506   14787 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 04:15:56.450617   14787 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 04:15:56.450671   14787 cni.go:84] Creating CNI manager for ""
	I0610 04:15:56.450690   14787 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 04:15:56.450744   14787 start.go:340] cluster config:
	{Name:download-only-586000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-586000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:15:56.455479   14787 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:15:56.460041   14787 out.go:97] Downloading VM boot image ...
	I0610 04:15:56.460076   14787 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso
	I0610 04:16:04.036447   14787 out.go:97] Starting "download-only-586000" primary control-plane node in "download-only-586000" cluster
	I0610 04:16:04.036472   14787 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:16:04.127660   14787 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 04:16:04.127686   14787 cache.go:56] Caching tarball of preloaded images
	I0610 04:16:04.127900   14787 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:16:04.133090   14787 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0610 04:16:04.133101   14787 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:04.365315   14787 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 04:16:13.073750   14787 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:13.073919   14787 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:13.769012   14787 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 04:16:13.769226   14787 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/download-only-586000/config.json ...
	I0610 04:16:13.769247   14787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/download-only-586000/config.json: {Name:mke2151e3aeea21948ac232c5b18ed83ea85d69a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:16:13.769503   14787 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 04:16:13.770490   14787 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0610 04:16:14.156017   14787 out.go:169] 
	W0610 04:16:14.162128   14787 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900 0x106e49900] Decompressors:map[bz2:0x140007679b0 gz:0x140007679b8 tar:0x14000767960 tar.bz2:0x14000767970 tar.gz:0x14000767980 tar.xz:0x14000767990 tar.zst:0x140007679a0 tbz2:0x14000767970 tgz:0x14000767980 txz:0x14000767990 tzst:0x140007679a0 xz:0x140007679c0 zip:0x140007679d0 zst:0x140007679c8] Getters:map[file:0x14000062c60 http:0x14000884280 https:0x140008842d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0610 04:16:14.162153   14787 out_reason.go:110] 
	W0610 04:16:14.170025   14787 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 04:16:14.173998   14787 out.go:169] 
	
	
	* The control-plane node download-only-586000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-586000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
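
Note on the failure captured above: the "Failed to cache kubectl" error is an HTTP 404 on the checksum URL taken verbatim from the download.go line, presumably because upstream never published darwin/arm64 kubectl binaries for v1.20.0. A minimal check from a shell (assuming curl is available) would be:

    # both URLs are copied from the getter error above; per that error,
    # the .sha256 checksum request is the one returning 404
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl

The subtest still passes because, as the log shows, a non-zero exit from "minikube logs" is recorded but tolerated for a download-only profile whose host was never created.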

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-586000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.1/json-events (10.19s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-791000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-791000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 : (10.190061083s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (10.19s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-791000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-791000: exit status 85 (76.582625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:15 PDT |                     |
	|         | -p download-only-586000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| delete  | -p download-only-586000        | download-only-586000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT | 10 Jun 24 04:16 PDT |
	| start   | -o=json --download-only        | download-only-791000 | jenkins | v1.33.1 | 10 Jun 24 04:16 PDT |                     |
	|         | -p download-only-791000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 04:16:14
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 04:16:14.832746   14825 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:16:14.832884   14825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:16:14.832887   14825 out.go:304] Setting ErrFile to fd 2...
	I0610 04:16:14.832890   14825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:16:14.833009   14825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:16:14.834036   14825 out.go:298] Setting JSON to true
	I0610 04:16:14.850146   14825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8145,"bootTime":1718010029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:16:14.850212   14825 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:16:14.854906   14825 out.go:97] [download-only-791000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:16:14.859852   14825 out.go:169] MINIKUBE_LOCATION=19052
	I0610 04:16:14.855010   14825 notify.go:220] Checking for updates...
	I0610 04:16:14.866821   14825 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:16:14.869868   14825 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:16:14.872813   14825 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:16:14.875814   14825 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	W0610 04:16:14.880369   14825 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 04:16:14.880547   14825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:16:14.883776   14825 out.go:97] Using the qemu2 driver based on user configuration
	I0610 04:16:14.883782   14825 start.go:297] selected driver: qemu2
	I0610 04:16:14.883785   14825 start.go:901] validating driver "qemu2" against <nil>
	I0610 04:16:14.883821   14825 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 04:16:14.886830   14825 out.go:169] Automatically selected the socket_vmnet network
	I0610 04:16:14.891886   14825 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 04:16:14.891977   14825 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 04:16:14.891994   14825 cni.go:84] Creating CNI manager for ""
	I0610 04:16:14.892002   14825 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 04:16:14.892010   14825 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 04:16:14.892064   14825 start.go:340] cluster config:
	{Name:download-only-791000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:16:14.896286   14825 iso.go:125] acquiring lock: {Name:mkea143ffb57a8d7528b01730a37b65716043d31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 04:16:14.898855   14825 out.go:97] Starting "download-only-791000" primary control-plane node in "download-only-791000" cluster
	I0610 04:16:14.898862   14825 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:16:15.110641   14825 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:16:15.110722   14825 cache.go:56] Caching tarball of preloaded images
	I0610 04:16:15.112509   14825 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:16:15.117390   14825 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0610 04:16:15.117419   14825 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:15.324605   14825 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4?checksum=md5:7ffd0655905ace939b15286e37914582 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 04:16:23.097939   14825 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:23.098109   14825 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0610 04:16:23.639938   14825 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 04:16:23.640125   14825 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/download-only-791000/config.json ...
	I0610 04:16:23.640140   14825 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19052-14289/.minikube/profiles/download-only-791000/config.json: {Name:mkca8645b7aed84f5a569dd1743edefc836dfca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 04:16:23.641260   14825 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 04:16:23.641373   14825 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19052-14289/.minikube/cache/darwin/arm64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-791000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-791000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.08s)

TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-791000
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-722000 --alsologtostderr --binary-mirror http://127.0.0.1:52803 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-722000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-722000
--- PASS: TestBinaryMirror (0.38s)
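
TestBinaryMirror above points minikube's binary downloads at a local HTTP endpoint via --binary-mirror (the test supplies its own server on an ephemeral port, 52803 here). A hand-run equivalent is sketched below; the mirror directory, port 8080, and profile name are assumptions, and the mirror is expected to expose the same release/<version>/bin/<os>/<arch>/ layout as the default download source:

    # serve a pre-populated mirror directory, then fetch binaries through it
    python3 -m http.server 8080 --directory ./k8s-mirror &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-manual \
      --binary-mirror http://127.0.0.1:8080 --driver=qemu2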

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-057000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-057000: exit status 85 (56.474625ms)

-- stdout --
	* Profile "addons-057000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-057000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-057000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-057000: exit status 85 (60.422042ms)

-- stdout --
	* Profile "addons-057000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-057000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.82s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.82s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status: exit status 7 (31.437959ms)

-- stdout --
	nospam-972000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status: exit status 7 (29.714583ms)

-- stdout --
	nospam-972000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status: exit status 7 (29.709583ms)

-- stdout --
	nospam-972000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause: exit status 83 (37.634167ms)

-- stdout --
	* The control-plane node nospam-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-972000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause: exit status 83 (39.562375ms)

-- stdout --
	* The control-plane node nospam-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-972000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause: exit status 83 (38.842709ms)

-- stdout --
	* The control-plane node nospam-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-972000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause: exit status 83 (40.705125ms)

-- stdout --
	* The control-plane node nospam-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-972000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause: exit status 83 (39.006541ms)

-- stdout --
	* The control-plane node nospam-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-972000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause: exit status 83 (40.74975ms)

-- stdout --
	* The control-plane node nospam-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-972000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (5.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 stop: (1.886576666s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 stop: (2.039629083s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-972000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-972000 stop: (1.877128s)
--- PASS: TestErrorSpam/stop (5.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19052-14289/.minikube/files/etc/test/nested/copy/14783/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.1: (1.030300667s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.3: (1.118079417s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local58212329/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache add minikube-local-cache-test:functional-296000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 cache delete minikube-local-cache-test:functional-296000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-296000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
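
Taken together, the CacheCmd subtests above cover the full image-cache lifecycle: add a remote image, add a locally built one, delete individual entries, and list what remains. A minimal manual replay, using only commands that appear verbatim in these logs:

    out/minikube-darwin-arm64 -p functional-296000 cache add registry.k8s.io/pause:3.1
    out/minikube-darwin-arm64 cache list
    out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1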

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 config get cpus: exit status 14 (29.885875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 config get cpus: exit status 14 (31.508458ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.574541ms)

-- stdout --
	* [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0610 04:17:55.022951   15305 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:17:55.023090   15305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:17:55.023094   15305 out.go:304] Setting ErrFile to fd 2...
	I0610 04:17:55.023096   15305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:17:55.023247   15305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:17:55.024241   15305 out.go:298] Setting JSON to false
	I0610 04:17:55.040372   15305 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8246,"bootTime":1718010029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:17:55.040444   15305 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:17:55.042483   15305 out.go:177] * [functional-296000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 04:17:55.049445   15305 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:17:55.049480   15305 notify.go:220] Checking for updates...
	I0610 04:17:55.057381   15305 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:17:55.061396   15305 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:17:55.064379   15305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:17:55.067458   15305 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:17:55.070389   15305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:17:55.072147   15305 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:17:55.072410   15305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:17:55.076408   15305 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 04:17:55.083233   15305 start.go:297] selected driver: qemu2
	I0610 04:17:55.083237   15305 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:17:55.083285   15305 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:17:55.089386   15305 out.go:177] 
	W0610 04:17:55.093412   15305 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 04:17:55.097399   15305 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
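
DryRun exercises minikube's pre-flight validation without creating a VM: the first invocation requests 250MB and is rejected with RSRC_INSUFFICIENT_REQ_MEMORY, the second drops --memory and passes. Per the error text the usable minimum is 1800MB, so a request at that boundary should clear the same check (treating 1800MB as an accepted value is an assumption):

    # same command as the failing run above, with memory raised to the stated minimum
    out/minikube-darwin-arm64 start -p functional-296000 --dry-run \
      --memory 1800MB --alsologtostderr --driver=qemu2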

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-296000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.355ms)
-- stdout --
	* [functional-296000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr **
	I0610 04:17:54.904421   15301 out.go:291] Setting OutFile to fd 1 ...
	I0610 04:17:54.904523   15301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:17:54.904527   15301 out.go:304] Setting ErrFile to fd 2...
	I0610 04:17:54.904529   15301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 04:17:54.904680   15301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19052-14289/.minikube/bin
	I0610 04:17:54.906126   15301 out.go:298] Setting JSON to false
	I0610 04:17:54.923351   15301 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8245,"bootTime":1718010029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 04:17:54.923421   15301 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 04:17:54.927463   15301 out.go:177] * [functional-296000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	I0610 04:17:54.935390   15301 out.go:177]   - MINIKUBE_LOCATION=19052
	I0610 04:17:54.938378   15301 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	I0610 04:17:54.935449   15301 notify.go:220] Checking for updates...
	I0610 04:17:54.945384   15301 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 04:17:54.948390   15301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 04:17:54.951441   15301 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	I0610 04:17:54.954328   15301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 04:17:54.957699   15301 config.go:182] Loaded profile config "functional-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 04:17:54.957970   15301 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 04:17:54.962431   15301 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0610 04:17:54.969408   15301 start.go:297] selected driver: qemu2
	I0610 04:17:54.969414   15301 start.go:901] validating driver "qemu2" against &{Name:functional-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 04:17:54.969481   15301 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 04:17:54.975225   15301 out.go:177] 
	W0610 04:17:54.979429   15301 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 04:17:54.983455   15301 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
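Note: the French output is the point of this test. "Utilisation du pilote qemu2 basé sur le profil existant" is "Using the qemu2 driver based on the existing profile", and the X line is the same RSRC_INSUFFICIENT_REQ_MEMORY rejection as in DryRun ("the requested memory allocation of 250MiB is less than the usable minimum of 1800MB"), localized because the binary is run under a French locale, presumably via LC_ALL/LANG in the test environment. A sketch of locale selection with golang.org/x/text, assuming LC_ALL/LANG drive the match and with an illustrative supported-language list:

package main

import (
	"fmt"
	"os"

	"golang.org/x/text/language"
)

func main() {
	// Candidate catalogs; minikube's real list is larger.
	supported := []language.Tag{language.English, language.French, language.Japanese}
	matcher := language.NewMatcher(supported)
	// With LANG=fr_FR.UTF-8 in the environment this selects "fr".
	tag, _ := language.MatchStrings(matcher, os.Getenv("LC_ALL"), os.Getenv("LANG"))
	fmt.Println("selected message catalog:", tag)
}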
TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "67.739667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.880875ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "69.131541ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.571125ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)
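Note: the --light variants run in about half the time (33ms vs 69ms), consistent with --light skipping cluster-status validation per profile. A consumer can decode the JSON with a small struct; the field names below are an assumption about the payload shape ({"invalid":[...],"valid":[...]}), not a documented contract:

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical subset of the `profile list -o json` payload.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"functional-296000"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}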
TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (2.31s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.267726125s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-296000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.31s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image rm gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-296000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 image save --daemon gcr.io/google-containers/addon-resizer:functional-296000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-296000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013140916s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
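Note: dscacheutil queries macOS's system resolver, which is the path `minikube tunnel` relies on to make the in-cluster service name visible on the host; the ~10s is retry/polling inside the lookup, not a single query. A rough Go analogue of the probe (an approximation: whether the Go resolver consults the same macOS cache depends on how the binary's resolver is configured):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Same fully-qualified name the test resolves through the tunnel.
	addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("not resolvable:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}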
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-296000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-296000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-296000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-296000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.09s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-068000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-068000 --output=json --user=testUser: (3.085657375s)
--- PASS: TestJSONOutput/stop/Command (3.09s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
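Note: the DistinctCurrentSteps and IncreasingCurrentSteps subtests assert properties of the currentstep field in the emitted step events: no duplicates, and values that never decrease over the run. A compact sketch of the increasing-steps check (strictly increasing also implies distinct; the helper name is illustrative, and event parsing is as in the TestErrorJSONOutput example below):

package main

import "fmt"

// checkSteps fails when a step number repeats or goes backwards.
func checkSteps(steps []int) error {
	for i := 1; i < len(steps); i++ {
		if steps[i] <= steps[i-1] {
			return fmt.Errorf("step %d after %d is not increasing", steps[i], steps[i-1])
		}
	}
	return nil
}

func main() {
	fmt.Println(checkSteps([]int{0, 1, 2, 4})) // <nil>
	fmt.Println(checkSteps([]int{0, 1, 1}))    // error
}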
TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-360000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-360000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.910042ms)
-- stdout --
	{"specversion":"1.0","id":"e2ff4263-6a2c-4920-bfbd-9863515e9c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-360000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ecee411-055a-4d95-9ad2-a348732ad1a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19052"}}
	{"specversion":"1.0","id":"060cc875-d0e7-4833-b162-db153008828b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig"}}
	{"specversion":"1.0","id":"94593f60-a585-4634-97ff-5ebb548fb010","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"66ba1eb4-746d-48d6-9149-04b54aeea77a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d76ee0fc-825a-41ab-af1e-82692173fb41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube"}}
	{"specversion":"1.0","id":"d4dd67c8-c6d0-4831-a750-9e945d468877","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"45f5d581-6a3b-4d7a-bc76-99ee7307478c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-360000
--- PASS: TestErrorJSONOutput (0.32s)
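Note: every --output=json line above is a CloudEvents 1.0 envelope with a string-valued data map; the io.k8s.sigs.minikube.error event carries name, exitcode, and message, which is how DRV_UNSUPPORTED_OS maps to exit status 56. A minimal decoder matching the lines shown:

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent models just the fields used here; the payloads above also
// carry id, source, and datacontenttype.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := []byte(`{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`)
	var ev cloudEvent
	if err := json.Unmarshal(line, &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}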
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (2.07s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.07s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-227000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-448000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-448000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.762916ms)
-- stdout --
	* [NoKubernetes-448000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19052-14289/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19052-14289/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr **
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
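Note: exit status 14 is the usage-error path: --no-kubernetes contradicts --kubernetes-version, and the check fires before driver validation, which is why the run fails in under 100ms. A sketch of that style of mutually-exclusive-flag guard (function and parameter names are illustrative, not minikube's code):

package main

import (
	"errors"
	"fmt"
)

// validateStartFlags mirrors the MK_USAGE guard seen above.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	if err := validateStartFlags(true, "1.20"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}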
TestNoKubernetes/serial/VerifyK8sNotRunning (0.06s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-448000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-448000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (58.074083ms)
-- stdout --
	* The control-plane node NoKubernetes-448000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-448000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.06s)
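Note: the assertion here is purely on exit codes. `systemctl is-active --quiet` exits 0 only when the unit is active; in this run minikube's ssh wrapper itself exits 83 because the guest is stopped, which the test also treats as confirmation that kubelet is not running. Reading the code from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-448000",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero (83 above) is the expected outcome for this test.
			fmt.Println("exit status:", exitErr.ExitCode())
		} else {
			fmt.Println("failed to start command:", err)
		}
	} else {
		fmt.Println("kubelet is active (exit 0)")
	}
}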
TestNoKubernetes/serial/ProfileList (0.18s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.18s)
TestNoKubernetes/serial/Stop (3.58s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-448000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-448000: (3.578099959s)
--- PASS: TestNoKubernetes/serial/Stop (3.58s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-448000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-448000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.569542ms)
-- stdout --
	* The control-plane node NoKubernetes-448000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-448000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
TestStartStop/group/old-k8s-version/serial/Stop (3.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-278000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-278000 --alsologtostderr -v=3: (3.425249209s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-278000 -n old-k8s-version-278000: exit status 7 (59.580416ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-278000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
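Note: --format={{.Host}} is a Go text/template rendered over the status object, which is why a stopped profile prints just "Stopped"; the accompanying exit status 7 is tolerated by the test ("may be ok"). A stand-in illustration (the struct is minimal, not minikube's real status type):

package main

import (
	"os"
	"text/template"
)

// Minimal stand-in for the object the template is rendered against.
type status struct{ Host, Kubelet, APIServer string }

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, status{Host: "Stopped"}) // prints: Stopped
}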
TestStartStop/group/no-preload/serial/Stop (3.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-335000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-335000 --alsologtostderr -v=3: (3.11726625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-335000 -n no-preload-335000: exit status 7 (58.026208ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-335000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-601000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-601000 --alsologtostderr -v=3: (2.022288292s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (55.07575ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-601000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-211000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-211000 --alsologtostderr -v=3: (1.946209583s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-211000 -n default-k8s-diff-port-211000: exit status 7 (55.316459ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-211000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-332000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)

TestStartStop/group/newest-cni/serial/Stop (3.05s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-332000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-332000 --alsologtostderr -v=3: (3.050379083s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.05s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-332000 -n newest-cni-332000: exit status 7 (57.531625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-332000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port85370228/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718018242297751000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port85370228/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718018242297751000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port85370228/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718018242297751000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port85370228/001/test-1718018242297751000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (55.503833ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.100459ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.765208ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.812833ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.550666ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.501125ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.551958ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo umount -f /mount-9p": exit status 83 (48.573166ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port85370228/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.20s)
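Note on the skip: `minikube mount` starts a 9p file server on the host that must listen on a non-localhost port, and macOS prompts before letting an unsigned binary do that, so in unattended CI the mount never appears and the test skips after its retry budget (seven findmnt probes above). The retry shape, sketched in Go (attempt count and interval are illustrative, not the test's actual constants):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 7; attempt++ {
		err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-296000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount appeared")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount did not appear; skipping")
}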
TestFunctional/parallel/MountCmd/specific-port (10.88s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2070848403/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.396292ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.239459ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.979125ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.854125ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.641125ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.478333ms)
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.14775ms)
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "sudo umount -f /mount-9p": exit status 83 (49.295208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-296000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2070848403/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.88s)
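Note: every findmnt retry above fails identically (exit status 83, state=Stopped), so the poll loop can never observe the mount; the --port 46464 variant changes only where the mount server listens. Against a running cluster the same probe can be issued by hand; a sketch, with the mount point and port taken from the log above:

    # Sketch (assumes a running cluster): check the 9p mount in the guest.
    out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T /mount-9p | grep 9p"
    # Confirm the host-side mount server is listening on the requested port:
    lsof -iTCP:46464 -sTCP:LISTEN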

TestFunctional/parallel/MountCmd/VerifyCleanup (11.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2536245515/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2536245515/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2536245515/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (79.392334ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (84.712167ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (86.725917ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (86.633833ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (86.427792ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (84.952ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T" /mount1: exit status 83 (87.159541ms)

-- stdout --
	* The control-plane node functional-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-296000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2536245515/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2536245515/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-296000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2536245515/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.34s)
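Note: VerifyCleanup starts three parallel mount daemons (/mount1, /mount2, /mount3) and is meant to check that stopping them removes every mount; here the very first probe never succeeds, so cleanup itself is never exercised. A cleanup-check sketch, assuming a running cluster (the loop is illustrative, not part of the test):

    # Sketch (assumes a running cluster): after the mount daemons stop,
    # none of the three targets should still resolve as mounts in the guest.
    for m in /mount1 /mount2 /mount3; do
      out/minikube-darwin-arm64 -p functional-296000 ssh "findmnt -T $m" \
        || echo "$m is not mounted (expected after cleanup)"
    done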

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
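Note: the run of zero-second SKIPs above reflects environment gates, not failures: each test checks the driver or OS up front and bails out before doing any cluster work, which is expected on the QEMU/macOS runner. When triaging a report like this one, it can help to tally skips separately from failures; a sketch (the filename report.txt is an assumption):

    # Sketch: count SKIP vs FAIL lines in a saved copy of this report.
    grep -c '^--- SKIP' report.txt
    grep -c '^--- FAIL' report.txt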

TestNetworkPlugins/group/cilium (2.54s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-463000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-463000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-463000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /etc/hosts:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /etc/resolv.conf:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-463000

>>> host: crictl pods:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: crictl containers:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> k8s: describe netcat deployment:
error: context "cilium-463000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-463000" does not exist

>>> k8s: netcat logs:
error: context "cilium-463000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-463000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-463000" does not exist

>>> k8s: coredns logs:
error: context "cilium-463000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-463000" does not exist

>>> k8s: api server logs:
error: context "cilium-463000" does not exist

>>> host: /etc/cni:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: ip a s:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: ip r s:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: iptables-save:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: iptables table nat:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-463000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-463000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-463000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-463000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-463000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-463000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-463000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-463000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-463000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-463000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-463000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: kubelet daemon config:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> k8s: kubelet logs:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-463000

>>> host: docker daemon status:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: docker daemon config:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: docker system info:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: cri-docker daemon status:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: cri-docker daemon config:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: cri-dockerd version:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: containerd daemon status:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: containerd daemon config:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: containerd config dump:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: crio daemon status:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: crio daemon config:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: /etc/crio:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

>>> host: crio config:
* Profile "cilium-463000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-463000"

----------------------- debugLogs end: cilium-463000 [took: 2.310944125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-463000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-463000
--- SKIP: TestNetworkPlugins/group/cilium (2.54s)
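Note: every probe in the debugLogs block above fails the same way because the test was skipped before a cluster was created, so neither a kubeconfig context nor a minikube profile named cilium-463000 exists; the empty kubectl config (clusters/contexts/users all null) confirms this. Both checks below come from the hints in the log itself:

    # Sketch: confirm the profile and context are genuinely absent.
    out/minikube-darwin-arm64 profile list
    kubectl config get-contexts cilium-463000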

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-923000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-923000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)